00:00:00.001 Started by upstream project "autotest-per-patch" build number 126103 00:00:00.001 originally caused by: 00:00:00.001 Started by user Chachulski, JaroslawX 00:00:00.080 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.081 The recommended git tool is: git 00:00:00.081 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.111 Fetching changes from the remote Git repository 00:00:00.113 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.138 Using shallow fetch with depth 1 00:00:00.138 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.138 > git --version # timeout=10 00:00:00.167 > git --version # 'git version 2.39.2' 00:00:00.167 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.193 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.193 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.410 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.421 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.435 Checking out Revision 308e970df89ed396a3f9dcf22fba8891259694e4 (FETCH_HEAD) 00:00:04.435 > git config core.sparsecheckout # timeout=10 00:00:04.446 > git read-tree -mu HEAD # timeout=10 00:00:04.462 > git checkout -f 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=5 00:00:04.479 Commit message: "jjb/create-perf-report: make job run concurrent" 00:00:04.479 > git rev-list --no-walk 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=10 00:00:04.610 [Pipeline] Start of Pipeline 00:00:04.623 [Pipeline] library 00:00:04.624 Loading library shm_lib@master 00:00:04.624 Library shm_lib@master is cached. Copying from home. 00:00:04.640 [Pipeline] node 00:00:04.653 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.654 [Pipeline] { 00:00:04.664 [Pipeline] catchError 00:00:04.665 [Pipeline] { 00:00:04.678 [Pipeline] wrap 00:00:04.688 [Pipeline] { 00:00:04.696 [Pipeline] stage 00:00:04.698 [Pipeline] { (Prologue) 00:00:04.878 [Pipeline] sh 00:00:05.187 + logger -p user.info -t JENKINS-CI 00:00:05.206 [Pipeline] echo 00:00:05.208 Node: CYP9 00:00:05.215 [Pipeline] sh 00:00:05.516 [Pipeline] setCustomBuildProperty 00:00:05.527 [Pipeline] echo 00:00:05.528 Cleanup processes 00:00:05.532 [Pipeline] sh 00:00:05.812 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.812 1752610 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.823 [Pipeline] sh 00:00:06.106 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.106 ++ grep -v 'sudo pgrep' 00:00:06.106 ++ awk '{print $1}' 00:00:06.106 + sudo kill -9 00:00:06.106 + true 00:00:06.122 [Pipeline] cleanWs 00:00:06.134 [WS-CLEANUP] Deleting project workspace... 00:00:06.134 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.142 [WS-CLEANUP] done 00:00:06.146 [Pipeline] setCustomBuildProperty 00:00:06.157 [Pipeline] sh 00:00:06.442 + sudo git config --global --replace-all safe.directory '*' 00:00:06.507 [Pipeline] httpRequest 00:00:06.539 [Pipeline] echo 00:00:06.540 Sorcerer 10.211.164.101 is alive 00:00:06.548 [Pipeline] httpRequest 00:00:06.551 HttpMethod: GET 00:00:06.552 URL: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:06.552 Sending request to url: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:06.568 Response Code: HTTP/1.1 200 OK 00:00:06.569 Success: Status code 200 is in the accepted range: 200,404 00:00:06.569 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:11.840 [Pipeline] sh 00:00:12.132 + tar --no-same-owner -xf jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:12.151 [Pipeline] httpRequest 00:00:12.185 [Pipeline] echo 00:00:12.187 Sorcerer 10.211.164.101 is alive 00:00:12.197 [Pipeline] httpRequest 00:00:12.202 HttpMethod: GET 00:00:12.202 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:12.203 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:12.228 Response Code: HTTP/1.1 200 OK 00:00:12.229 Success: Status code 200 is in the accepted range: 200,404 00:00:12.229 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:42.660 [Pipeline] sh 00:00:42.949 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:45.511 [Pipeline] sh 00:00:45.799 + git -C spdk log --oneline -n5 00:00:45.799 719d03c6a sock/uring: only register net impl if supported 00:00:45.799 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:45.799 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:45.799 6c7c1f57e accel: add sequence outstanding stat 00:00:45.799 3bc8e6a26 accel: add utility to put task 00:00:45.814 [Pipeline] } 00:00:45.836 [Pipeline] // stage 00:00:45.847 [Pipeline] stage 00:00:45.849 [Pipeline] { (Prepare) 00:00:45.872 [Pipeline] writeFile 00:00:45.894 [Pipeline] sh 00:00:46.184 + logger -p user.info -t JENKINS-CI 00:00:46.209 [Pipeline] sh 00:00:46.494 + logger -p user.info -t JENKINS-CI 00:00:46.507 [Pipeline] sh 00:00:46.789 + cat autorun-spdk.conf 00:00:46.789 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.789 SPDK_TEST_NVMF=1 00:00:46.789 SPDK_TEST_NVME_CLI=1 00:00:46.789 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.789 SPDK_TEST_NVMF_NICS=e810 00:00:46.789 SPDK_TEST_VFIOUSER=1 00:00:46.789 SPDK_RUN_UBSAN=1 00:00:46.789 NET_TYPE=phy 00:00:46.797 RUN_NIGHTLY=0 00:00:46.802 [Pipeline] readFile 00:00:46.833 [Pipeline] withEnv 00:00:46.835 [Pipeline] { 00:00:46.853 [Pipeline] sh 00:00:47.142 + set -ex 00:00:47.142 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:47.142 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:47.142 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.142 ++ SPDK_TEST_NVMF=1 00:00:47.142 ++ SPDK_TEST_NVME_CLI=1 00:00:47.142 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.142 ++ SPDK_TEST_NVMF_NICS=e810 00:00:47.142 ++ SPDK_TEST_VFIOUSER=1 00:00:47.142 ++ SPDK_RUN_UBSAN=1 00:00:47.142 ++ NET_TYPE=phy 00:00:47.142 ++ RUN_NIGHTLY=0 00:00:47.142 + case $SPDK_TEST_NVMF_NICS in 00:00:47.142 + DRIVERS=ice 00:00:47.142 + [[ tcp == \r\d\m\a ]] 
00:00:47.142 + [[ -n ice ]] 00:00:47.142 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:47.142 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:47.142 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:47.142 rmmod: ERROR: Module irdma is not currently loaded 00:00:47.142 rmmod: ERROR: Module i40iw is not currently loaded 00:00:47.142 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:47.142 + true 00:00:47.142 + for D in $DRIVERS 00:00:47.142 + sudo modprobe ice 00:00:47.142 + exit 0 00:00:47.152 [Pipeline] } 00:00:47.169 [Pipeline] // withEnv 00:00:47.173 [Pipeline] } 00:00:47.190 [Pipeline] // stage 00:00:47.203 [Pipeline] catchError 00:00:47.205 [Pipeline] { 00:00:47.223 [Pipeline] timeout 00:00:47.223 Timeout set to expire in 50 min 00:00:47.225 [Pipeline] { 00:00:47.242 [Pipeline] stage 00:00:47.243 [Pipeline] { (Tests) 00:00:47.258 [Pipeline] sh 00:00:47.544 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.544 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.544 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.544 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:47.544 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:47.544 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:47.544 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:47.544 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:47.544 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:47.544 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:47.544 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:47.544 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.544 + source /etc/os-release 00:00:47.544 ++ NAME='Fedora Linux' 00:00:47.544 ++ VERSION='38 (Cloud Edition)' 00:00:47.544 ++ ID=fedora 00:00:47.544 ++ VERSION_ID=38 00:00:47.544 ++ VERSION_CODENAME= 00:00:47.544 ++ PLATFORM_ID=platform:f38 00:00:47.544 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:47.544 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:47.544 ++ LOGO=fedora-logo-icon 00:00:47.544 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:47.544 ++ HOME_URL=https://fedoraproject.org/ 00:00:47.544 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:47.544 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:47.544 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:47.544 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:47.544 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:47.544 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:47.544 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:47.544 ++ SUPPORT_END=2024-05-14 00:00:47.544 ++ VARIANT='Cloud Edition' 00:00:47.544 ++ VARIANT_ID=cloud 00:00:47.544 + uname -a 00:00:47.544 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:47.544 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:50.930 Hugepages 00:00:50.930 node hugesize free / total 00:00:50.930 node0 1048576kB 0 / 0 00:00:50.930 node0 2048kB 0 / 0 00:00:50.930 node1 1048576kB 0 / 0 00:00:50.930 node1 2048kB 0 / 0 00:00:50.930 00:00:50.930 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:50.930 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:50.930 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:50.930 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:50.930 I/OAT 0000:00:01.3 
8086 0b00 0 ioatdma - - 00:00:50.930 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:50.930 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:50.930 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:50.930 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:50.930 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:50.930 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:50.930 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:50.930 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:50.930 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:50.930 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:50.930 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:50.930 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:50.930 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:50.930 + rm -f /tmp/spdk-ld-path 00:00:50.930 + source autorun-spdk.conf 00:00:50.930 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.930 ++ SPDK_TEST_NVMF=1 00:00:50.930 ++ SPDK_TEST_NVME_CLI=1 00:00:50.930 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.930 ++ SPDK_TEST_NVMF_NICS=e810 00:00:50.930 ++ SPDK_TEST_VFIOUSER=1 00:00:50.930 ++ SPDK_RUN_UBSAN=1 00:00:50.930 ++ NET_TYPE=phy 00:00:50.930 ++ RUN_NIGHTLY=0 00:00:50.930 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:50.930 + [[ -n '' ]] 00:00:50.930 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:50.930 + for M in /var/spdk/build-*-manifest.txt 00:00:50.930 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:50.930 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:50.930 + for M in /var/spdk/build-*-manifest.txt 00:00:50.930 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:50.930 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:50.930 ++ uname 00:00:50.930 + [[ Linux == \L\i\n\u\x ]] 00:00:50.930 + sudo dmesg -T 00:00:50.930 + sudo dmesg --clear 00:00:50.930 + dmesg_pid=1753691 00:00:50.930 + [[ Fedora Linux == FreeBSD ]] 00:00:50.930 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:50.930 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:50.930 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:50.930 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:50.930 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:50.930 + [[ -x /usr/src/fio-static/fio ]] 00:00:50.930 + export FIO_BIN=/usr/src/fio-static/fio 00:00:50.930 + FIO_BIN=/usr/src/fio-static/fio 00:00:50.930 + sudo dmesg -Tw 00:00:50.930 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:50.930 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:50.930 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:50.930 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.930 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.930 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:50.930 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.930 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.930 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:50.930 Test configuration: 00:00:50.930 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.930 SPDK_TEST_NVMF=1 00:00:50.930 SPDK_TEST_NVME_CLI=1 00:00:50.930 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.930 SPDK_TEST_NVMF_NICS=e810 00:00:50.930 SPDK_TEST_VFIOUSER=1 00:00:50.930 SPDK_RUN_UBSAN=1 00:00:50.930 NET_TYPE=phy 00:00:50.930 RUN_NIGHTLY=0 10:39:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:50.930 10:39:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:50.930 10:39:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:50.930 10:39:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:50.930 10:39:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.930 10:39:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.930 10:39:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.930 10:39:07 -- paths/export.sh@5 -- $ export PATH 00:00:50.930 10:39:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.930 10:39:07 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:50.930 10:39:07 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:50.930 10:39:07 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720773547.XXXXXX 00:00:50.930 10:39:07 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720773547.DblGBb 00:00:50.930 10:39:07 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:50.930 10:39:07 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:50.930 10:39:07 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:50.930 10:39:07 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:50.930 10:39:07 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:50.930 10:39:07 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:50.930 10:39:07 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:50.930 10:39:07 -- common/autotest_common.sh@10 -- $ set +x 00:00:50.930 10:39:07 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:50.930 10:39:07 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:50.930 10:39:07 -- pm/common@17 -- $ local monitor 00:00:50.930 10:39:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.931 10:39:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.931 10:39:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.931 10:39:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.931 10:39:07 -- pm/common@21 -- $ date +%s 00:00:50.931 10:39:07 -- pm/common@25 -- $ sleep 1 00:00:50.931 10:39:07 -- pm/common@21 -- $ date +%s 00:00:50.931 10:39:07 -- pm/common@21 -- $ date +%s 00:00:50.931 10:39:07 -- pm/common@21 -- $ date +%s 00:00:50.931 10:39:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720773547 00:00:50.931 10:39:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720773547 00:00:50.931 10:39:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720773547 00:00:50.931 10:39:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720773547 00:00:50.931 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720773547_collect-vmstat.pm.log 00:00:50.931 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720773547_collect-cpu-load.pm.log 00:00:50.931 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720773547_collect-cpu-temp.pm.log 00:00:50.931 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720773547_collect-bmc-pm.bmc.pm.log 00:00:51.873 10:39:08 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:51.873 10:39:08 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:51.873 10:39:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:51.873 10:39:08 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:51.873 10:39:08 -- spdk/autobuild.sh@16 -- $ date -u 00:00:51.873 Fri Jul 12 08:39:08 AM UTC 2024 00:00:51.873 10:39:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:51.873 v24.09-pre-202-g719d03c6a 00:00:51.873 10:39:08 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:51.873 10:39:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:51.873 10:39:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:51.873 10:39:08 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:51.873 10:39:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:51.873 10:39:08 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.135 ************************************ 00:00:52.135 START TEST ubsan 00:00:52.135 ************************************ 00:00:52.135 10:39:08 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:52.135 using ubsan 00:00:52.135 00:00:52.135 real 0m0.000s 00:00:52.135 user 0m0.000s 00:00:52.135 sys 0m0.000s 00:00:52.135 10:39:08 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:52.135 10:39:08 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:52.135 ************************************ 00:00:52.135 END TEST ubsan 00:00:52.135 ************************************ 00:00:52.135 10:39:08 -- common/autotest_common.sh@1142 -- $ return 0 00:00:52.135 10:39:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:52.135 10:39:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:52.135 10:39:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:52.135 10:39:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:52.135 10:39:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:52.135 10:39:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:52.135 10:39:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:52.135 10:39:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:52.135 10:39:08 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:52.135 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:52.135 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:52.707 Using 'verbs' RDMA provider 00:01:08.571 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:20.814 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:20.814 Creating mk/config.mk...done. 00:01:20.814 Creating mk/cc.flags.mk...done. 00:01:20.814 Type 'make' to build. 
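For reference, the configure step recorded above can be reproduced outside of Jenkins with roughly the commands below. This is a minimal sketch, assuming an SPDK checkout with build dependencies already installed; the flag set is copied from the configure invocation in this log, and the -j count is machine-dependent (this job uses -j144).

    # Sketch: reproduce the configure/build step from this log.
    # Flags are copied verbatim from the autobuild configure line above;
    # --with-fio assumes a fio source tree at /usr/src/fio, as on this node.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"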
00:01:20.814 10:39:37 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:20.814 10:39:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:20.814 10:39:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:20.814 10:39:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.074 ************************************ 00:01:21.074 START TEST make 00:01:21.074 ************************************ 00:01:21.074 10:39:37 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:21.335 make[1]: Nothing to be done for 'all'. 00:01:22.727 The Meson build system 00:01:22.727 Version: 1.3.1 00:01:22.727 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:22.727 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:22.727 Build type: native build 00:01:22.727 Project name: libvfio-user 00:01:22.727 Project version: 0.0.1 00:01:22.727 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:22.727 C linker for the host machine: cc ld.bfd 2.39-16 00:01:22.727 Host machine cpu family: x86_64 00:01:22.727 Host machine cpu: x86_64 00:01:22.727 Run-time dependency threads found: YES 00:01:22.727 Library dl found: YES 00:01:22.727 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:22.727 Run-time dependency json-c found: YES 0.17 00:01:22.727 Run-time dependency cmocka found: YES 1.1.7 00:01:22.727 Program pytest-3 found: NO 00:01:22.727 Program flake8 found: NO 00:01:22.727 Program misspell-fixer found: NO 00:01:22.727 Program restructuredtext-lint found: NO 00:01:22.727 Program valgrind found: YES (/usr/bin/valgrind) 00:01:22.727 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:22.727 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:22.727 Compiler for C supports arguments -Wwrite-strings: YES 00:01:22.727 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:22.727 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:22.727 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:22.727 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:22.727 Build targets in project: 8 00:01:22.727 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:22.727 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:22.727 00:01:22.727 libvfio-user 0.0.1 00:01:22.727 00:01:22.727 User defined options 00:01:22.727 buildtype : debug 00:01:22.727 default_library: shared 00:01:22.727 libdir : /usr/local/lib 00:01:22.727 00:01:22.727 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:23.298 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:23.299 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:23.299 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:23.299 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:23.299 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:23.299 [5/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:23.299 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:23.299 [7/37] Compiling C object samples/null.p/null.c.o 00:01:23.299 [8/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:23.299 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:23.299 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:23.299 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:23.299 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:23.299 [13/37] Compiling C object samples/server.p/server.c.o 00:01:23.299 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:23.299 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:23.299 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:23.299 [17/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:23.299 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:23.299 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:23.299 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:23.299 [21/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:23.299 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:23.299 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:23.299 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:23.299 [25/37] Compiling C object samples/client.p/client.c.o 00:01:23.299 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:23.299 [27/37] Linking target samples/client 00:01:23.559 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:23.559 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:23.559 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:23.559 [31/37] Linking target test/unit_tests 00:01:23.559 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:23.559 [33/37] Linking target samples/server 00:01:23.821 [34/37] Linking target samples/null 00:01:23.821 [35/37] Linking target samples/lspci 00:01:23.821 [36/37] Linking target samples/gpio-pci-idio-16 00:01:23.821 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:23.821 INFO: autodetecting backend as ninja 00:01:23.821 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:23.821 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:24.083 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:24.083 ninja: no work to do. 00:01:30.682 The Meson build system 00:01:30.682 Version: 1.3.1 00:01:30.682 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:30.682 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:30.682 Build type: native build 00:01:30.682 Program cat found: YES (/usr/bin/cat) 00:01:30.682 Project name: DPDK 00:01:30.682 Project version: 24.03.0 00:01:30.682 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:30.682 C linker for the host machine: cc ld.bfd 2.39-16 00:01:30.682 Host machine cpu family: x86_64 00:01:30.682 Host machine cpu: x86_64 00:01:30.682 Message: ## Building in Developer Mode ## 00:01:30.682 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:30.682 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:30.682 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:30.682 Program python3 found: YES (/usr/bin/python3) 00:01:30.682 Program cat found: YES (/usr/bin/cat) 00:01:30.682 Compiler for C supports arguments -march=native: YES 00:01:30.682 Checking for size of "void *" : 8 00:01:30.682 Checking for size of "void *" : 8 (cached) 00:01:30.682 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:30.682 Library m found: YES 00:01:30.682 Library numa found: YES 00:01:30.682 Has header "numaif.h" : YES 00:01:30.682 Library fdt found: NO 00:01:30.682 Library execinfo found: NO 00:01:30.682 Has header "execinfo.h" : YES 00:01:30.682 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:30.682 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:30.682 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:30.682 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:30.682 Run-time dependency openssl found: YES 3.0.9 00:01:30.682 Run-time dependency libpcap found: YES 1.10.4 00:01:30.682 Has header "pcap.h" with dependency libpcap: YES 00:01:30.682 Compiler for C supports arguments -Wcast-qual: YES 00:01:30.682 Compiler for C supports arguments -Wdeprecated: YES 00:01:30.682 Compiler for C supports arguments -Wformat: YES 00:01:30.682 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:30.682 Compiler for C supports arguments -Wformat-security: NO 00:01:30.682 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:30.682 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:30.682 Compiler for C supports arguments -Wnested-externs: YES 00:01:30.682 Compiler for C supports arguments -Wold-style-definition: YES 00:01:30.682 Compiler for C supports arguments -Wpointer-arith: YES 00:01:30.682 Compiler for C supports arguments -Wsign-compare: YES 00:01:30.682 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:30.682 Compiler for C supports arguments -Wundef: YES 00:01:30.682 Compiler for C supports arguments -Wwrite-strings: YES 00:01:30.682 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:30.682 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:30.682 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:30.682 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:30.682 Program objdump found: YES (/usr/bin/objdump) 00:01:30.682 Compiler for C supports arguments -mavx512f: YES 00:01:30.682 Checking if "AVX512 checking" compiles: YES 00:01:30.682 Fetching value of define "__SSE4_2__" : 1 00:01:30.682 Fetching value of define "__AES__" : 1 00:01:30.682 Fetching value of define "__AVX__" : 1 00:01:30.682 Fetching value of define "__AVX2__" : 1 00:01:30.682 Fetching value of define "__AVX512BW__" : 1 00:01:30.682 Fetching value of define "__AVX512CD__" : 1 00:01:30.682 Fetching value of define "__AVX512DQ__" : 1 00:01:30.682 Fetching value of define "__AVX512F__" : 1 00:01:30.682 Fetching value of define "__AVX512VL__" : 1 00:01:30.682 Fetching value of define "__PCLMUL__" : 1 00:01:30.682 Fetching value of define "__RDRND__" : 1 00:01:30.682 Fetching value of define "__RDSEED__" : 1 00:01:30.682 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:30.682 Fetching value of define "__znver1__" : (undefined) 00:01:30.682 Fetching value of define "__znver2__" : (undefined) 00:01:30.682 Fetching value of define "__znver3__" : (undefined) 00:01:30.682 Fetching value of define "__znver4__" : (undefined) 00:01:30.682 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:30.682 Message: lib/log: Defining dependency "log" 00:01:30.682 Message: lib/kvargs: Defining dependency "kvargs" 00:01:30.682 Message: lib/telemetry: Defining dependency "telemetry" 00:01:30.682 Checking for function "getentropy" : NO 00:01:30.682 Message: lib/eal: Defining dependency "eal" 00:01:30.682 Message: lib/ring: Defining dependency "ring" 00:01:30.682 Message: lib/rcu: Defining dependency "rcu" 00:01:30.682 Message: lib/mempool: Defining dependency "mempool" 00:01:30.682 Message: lib/mbuf: Defining dependency "mbuf" 00:01:30.682 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:30.682 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:30.682 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:30.682 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:30.682 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:30.682 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:30.682 Compiler for C supports arguments -mpclmul: YES 00:01:30.682 Compiler for C supports arguments -maes: YES 00:01:30.682 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:30.682 Compiler for C supports arguments -mavx512bw: YES 00:01:30.682 Compiler for C supports arguments -mavx512dq: YES 00:01:30.682 Compiler for C supports arguments -mavx512vl: YES 00:01:30.682 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:30.682 Compiler for C supports arguments -mavx2: YES 00:01:30.682 Compiler for C supports arguments -mavx: YES 00:01:30.682 Message: lib/net: Defining dependency "net" 00:01:30.682 Message: lib/meter: Defining dependency "meter" 00:01:30.682 Message: lib/ethdev: Defining dependency "ethdev" 00:01:30.682 Message: lib/pci: Defining dependency "pci" 00:01:30.682 Message: lib/cmdline: Defining dependency "cmdline" 00:01:30.682 Message: lib/hash: Defining dependency "hash" 00:01:30.682 Message: lib/timer: Defining dependency "timer" 00:01:30.682 Message: lib/compressdev: Defining dependency "compressdev" 00:01:30.682 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:30.682 Message: lib/dmadev: Defining dependency "dmadev" 00:01:30.682 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:30.682 Message: lib/power: Defining dependency "power" 00:01:30.682 Message: lib/reorder: Defining dependency "reorder" 00:01:30.682 Message: lib/security: Defining dependency "security" 00:01:30.682 Has header "linux/userfaultfd.h" : YES 00:01:30.682 Has header "linux/vduse.h" : YES 00:01:30.682 Message: lib/vhost: Defining dependency "vhost" 00:01:30.682 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:30.682 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:30.682 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:30.682 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:30.682 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:30.682 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:30.682 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:30.682 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:30.682 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:30.682 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:30.682 Program doxygen found: YES (/usr/bin/doxygen) 00:01:30.682 Configuring doxy-api-html.conf using configuration 00:01:30.682 Configuring doxy-api-man.conf using configuration 00:01:30.682 Program mandb found: YES (/usr/bin/mandb) 00:01:30.682 Program sphinx-build found: NO 00:01:30.682 Configuring rte_build_config.h using configuration 00:01:30.682 Message: 00:01:30.682 ================= 00:01:30.682 Applications Enabled 00:01:30.682 ================= 00:01:30.682 00:01:30.682 apps: 00:01:30.682 00:01:30.682 00:01:30.682 Message: 00:01:30.682 ================= 00:01:30.682 Libraries Enabled 00:01:30.682 ================= 00:01:30.682 00:01:30.682 libs: 00:01:30.682 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:30.682 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:30.682 cryptodev, dmadev, power, reorder, security, vhost, 00:01:30.682 00:01:30.682 Message: 00:01:30.682 =============== 00:01:30.682 Drivers Enabled 00:01:30.682 =============== 00:01:30.682 00:01:30.682 common: 00:01:30.682 00:01:30.682 bus: 00:01:30.682 pci, vdev, 00:01:30.682 mempool: 00:01:30.682 ring, 00:01:30.682 dma: 00:01:30.682 00:01:30.682 net: 00:01:30.682 00:01:30.682 crypto: 00:01:30.682 00:01:30.682 compress: 00:01:30.682 00:01:30.682 vdpa: 00:01:30.682 00:01:30.682 00:01:30.682 Message: 00:01:30.682 ================= 00:01:30.682 Content Skipped 00:01:30.682 ================= 00:01:30.682 00:01:30.682 apps: 00:01:30.682 dumpcap: explicitly disabled via build config 00:01:30.682 graph: explicitly disabled via build config 00:01:30.682 pdump: explicitly disabled via build config 00:01:30.682 proc-info: explicitly disabled via build config 00:01:30.682 test-acl: explicitly disabled via build config 00:01:30.683 test-bbdev: explicitly disabled via build config 00:01:30.683 test-cmdline: explicitly disabled via build config 00:01:30.683 test-compress-perf: explicitly disabled via build config 00:01:30.683 test-crypto-perf: explicitly disabled via build config 00:01:30.683 test-dma-perf: explicitly disabled via build config 00:01:30.683 test-eventdev: explicitly disabled via build config 00:01:30.683 test-fib: explicitly disabled via build config 00:01:30.683 test-flow-perf: explicitly disabled via build config 00:01:30.683 test-gpudev: explicitly disabled via build config 00:01:30.683 
test-mldev: explicitly disabled via build config 00:01:30.683 test-pipeline: explicitly disabled via build config 00:01:30.683 test-pmd: explicitly disabled via build config 00:01:30.683 test-regex: explicitly disabled via build config 00:01:30.683 test-sad: explicitly disabled via build config 00:01:30.683 test-security-perf: explicitly disabled via build config 00:01:30.683 00:01:30.683 libs: 00:01:30.683 argparse: explicitly disabled via build config 00:01:30.683 metrics: explicitly disabled via build config 00:01:30.683 acl: explicitly disabled via build config 00:01:30.683 bbdev: explicitly disabled via build config 00:01:30.683 bitratestats: explicitly disabled via build config 00:01:30.683 bpf: explicitly disabled via build config 00:01:30.683 cfgfile: explicitly disabled via build config 00:01:30.683 distributor: explicitly disabled via build config 00:01:30.683 efd: explicitly disabled via build config 00:01:30.683 eventdev: explicitly disabled via build config 00:01:30.683 dispatcher: explicitly disabled via build config 00:01:30.683 gpudev: explicitly disabled via build config 00:01:30.683 gro: explicitly disabled via build config 00:01:30.683 gso: explicitly disabled via build config 00:01:30.683 ip_frag: explicitly disabled via build config 00:01:30.683 jobstats: explicitly disabled via build config 00:01:30.683 latencystats: explicitly disabled via build config 00:01:30.683 lpm: explicitly disabled via build config 00:01:30.683 member: explicitly disabled via build config 00:01:30.683 pcapng: explicitly disabled via build config 00:01:30.683 rawdev: explicitly disabled via build config 00:01:30.683 regexdev: explicitly disabled via build config 00:01:30.683 mldev: explicitly disabled via build config 00:01:30.683 rib: explicitly disabled via build config 00:01:30.683 sched: explicitly disabled via build config 00:01:30.683 stack: explicitly disabled via build config 00:01:30.683 ipsec: explicitly disabled via build config 00:01:30.683 pdcp: explicitly disabled via build config 00:01:30.683 fib: explicitly disabled via build config 00:01:30.683 port: explicitly disabled via build config 00:01:30.683 pdump: explicitly disabled via build config 00:01:30.683 table: explicitly disabled via build config 00:01:30.683 pipeline: explicitly disabled via build config 00:01:30.683 graph: explicitly disabled via build config 00:01:30.683 node: explicitly disabled via build config 00:01:30.683 00:01:30.683 drivers: 00:01:30.683 common/cpt: not in enabled drivers build config 00:01:30.683 common/dpaax: not in enabled drivers build config 00:01:30.683 common/iavf: not in enabled drivers build config 00:01:30.683 common/idpf: not in enabled drivers build config 00:01:30.683 common/ionic: not in enabled drivers build config 00:01:30.683 common/mvep: not in enabled drivers build config 00:01:30.683 common/octeontx: not in enabled drivers build config 00:01:30.683 bus/auxiliary: not in enabled drivers build config 00:01:30.683 bus/cdx: not in enabled drivers build config 00:01:30.683 bus/dpaa: not in enabled drivers build config 00:01:30.683 bus/fslmc: not in enabled drivers build config 00:01:30.683 bus/ifpga: not in enabled drivers build config 00:01:30.683 bus/platform: not in enabled drivers build config 00:01:30.683 bus/uacce: not in enabled drivers build config 00:01:30.683 bus/vmbus: not in enabled drivers build config 00:01:30.683 common/cnxk: not in enabled drivers build config 00:01:30.683 common/mlx5: not in enabled drivers build config 00:01:30.683 common/nfp: not in enabled drivers 
build config 00:01:30.683 common/nitrox: not in enabled drivers build config 00:01:30.683 common/qat: not in enabled drivers build config 00:01:30.683 common/sfc_efx: not in enabled drivers build config 00:01:30.683 mempool/bucket: not in enabled drivers build config 00:01:30.683 mempool/cnxk: not in enabled drivers build config 00:01:30.683 mempool/dpaa: not in enabled drivers build config 00:01:30.683 mempool/dpaa2: not in enabled drivers build config 00:01:30.683 mempool/octeontx: not in enabled drivers build config 00:01:30.683 mempool/stack: not in enabled drivers build config 00:01:30.683 dma/cnxk: not in enabled drivers build config 00:01:30.683 dma/dpaa: not in enabled drivers build config 00:01:30.683 dma/dpaa2: not in enabled drivers build config 00:01:30.683 dma/hisilicon: not in enabled drivers build config 00:01:30.683 dma/idxd: not in enabled drivers build config 00:01:30.683 dma/ioat: not in enabled drivers build config 00:01:30.683 dma/skeleton: not in enabled drivers build config 00:01:30.683 net/af_packet: not in enabled drivers build config 00:01:30.683 net/af_xdp: not in enabled drivers build config 00:01:30.683 net/ark: not in enabled drivers build config 00:01:30.683 net/atlantic: not in enabled drivers build config 00:01:30.683 net/avp: not in enabled drivers build config 00:01:30.683 net/axgbe: not in enabled drivers build config 00:01:30.683 net/bnx2x: not in enabled drivers build config 00:01:30.683 net/bnxt: not in enabled drivers build config 00:01:30.683 net/bonding: not in enabled drivers build config 00:01:30.683 net/cnxk: not in enabled drivers build config 00:01:30.683 net/cpfl: not in enabled drivers build config 00:01:30.683 net/cxgbe: not in enabled drivers build config 00:01:30.683 net/dpaa: not in enabled drivers build config 00:01:30.683 net/dpaa2: not in enabled drivers build config 00:01:30.683 net/e1000: not in enabled drivers build config 00:01:30.683 net/ena: not in enabled drivers build config 00:01:30.683 net/enetc: not in enabled drivers build config 00:01:30.683 net/enetfec: not in enabled drivers build config 00:01:30.683 net/enic: not in enabled drivers build config 00:01:30.683 net/failsafe: not in enabled drivers build config 00:01:30.683 net/fm10k: not in enabled drivers build config 00:01:30.683 net/gve: not in enabled drivers build config 00:01:30.683 net/hinic: not in enabled drivers build config 00:01:30.683 net/hns3: not in enabled drivers build config 00:01:30.683 net/i40e: not in enabled drivers build config 00:01:30.683 net/iavf: not in enabled drivers build config 00:01:30.683 net/ice: not in enabled drivers build config 00:01:30.683 net/idpf: not in enabled drivers build config 00:01:30.683 net/igc: not in enabled drivers build config 00:01:30.683 net/ionic: not in enabled drivers build config 00:01:30.683 net/ipn3ke: not in enabled drivers build config 00:01:30.683 net/ixgbe: not in enabled drivers build config 00:01:30.683 net/mana: not in enabled drivers build config 00:01:30.683 net/memif: not in enabled drivers build config 00:01:30.683 net/mlx4: not in enabled drivers build config 00:01:30.683 net/mlx5: not in enabled drivers build config 00:01:30.683 net/mvneta: not in enabled drivers build config 00:01:30.683 net/mvpp2: not in enabled drivers build config 00:01:30.683 net/netvsc: not in enabled drivers build config 00:01:30.683 net/nfb: not in enabled drivers build config 00:01:30.683 net/nfp: not in enabled drivers build config 00:01:30.683 net/ngbe: not in enabled drivers build config 00:01:30.683 net/null: not in 
enabled drivers build config 00:01:30.683 net/octeontx: not in enabled drivers build config 00:01:30.683 net/octeon_ep: not in enabled drivers build config 00:01:30.683 net/pcap: not in enabled drivers build config 00:01:30.683 net/pfe: not in enabled drivers build config 00:01:30.683 net/qede: not in enabled drivers build config 00:01:30.683 net/ring: not in enabled drivers build config 00:01:30.683 net/sfc: not in enabled drivers build config 00:01:30.683 net/softnic: not in enabled drivers build config 00:01:30.683 net/tap: not in enabled drivers build config 00:01:30.683 net/thunderx: not in enabled drivers build config 00:01:30.683 net/txgbe: not in enabled drivers build config 00:01:30.683 net/vdev_netvsc: not in enabled drivers build config 00:01:30.683 net/vhost: not in enabled drivers build config 00:01:30.683 net/virtio: not in enabled drivers build config 00:01:30.683 net/vmxnet3: not in enabled drivers build config 00:01:30.683 raw/*: missing internal dependency, "rawdev" 00:01:30.683 crypto/armv8: not in enabled drivers build config 00:01:30.683 crypto/bcmfs: not in enabled drivers build config 00:01:30.683 crypto/caam_jr: not in enabled drivers build config 00:01:30.683 crypto/ccp: not in enabled drivers build config 00:01:30.683 crypto/cnxk: not in enabled drivers build config 00:01:30.683 crypto/dpaa_sec: not in enabled drivers build config 00:01:30.683 crypto/dpaa2_sec: not in enabled drivers build config 00:01:30.683 crypto/ipsec_mb: not in enabled drivers build config 00:01:30.683 crypto/mlx5: not in enabled drivers build config 00:01:30.683 crypto/mvsam: not in enabled drivers build config 00:01:30.684 crypto/nitrox: not in enabled drivers build config 00:01:30.684 crypto/null: not in enabled drivers build config 00:01:30.684 crypto/octeontx: not in enabled drivers build config 00:01:30.684 crypto/openssl: not in enabled drivers build config 00:01:30.684 crypto/scheduler: not in enabled drivers build config 00:01:30.684 crypto/uadk: not in enabled drivers build config 00:01:30.684 crypto/virtio: not in enabled drivers build config 00:01:30.684 compress/isal: not in enabled drivers build config 00:01:30.684 compress/mlx5: not in enabled drivers build config 00:01:30.684 compress/nitrox: not in enabled drivers build config 00:01:30.684 compress/octeontx: not in enabled drivers build config 00:01:30.684 compress/zlib: not in enabled drivers build config 00:01:30.684 regex/*: missing internal dependency, "regexdev" 00:01:30.684 ml/*: missing internal dependency, "mldev" 00:01:30.684 vdpa/ifc: not in enabled drivers build config 00:01:30.684 vdpa/mlx5: not in enabled drivers build config 00:01:30.684 vdpa/nfp: not in enabled drivers build config 00:01:30.684 vdpa/sfc: not in enabled drivers build config 00:01:30.684 event/*: missing internal dependency, "eventdev" 00:01:30.684 baseband/*: missing internal dependency, "bbdev" 00:01:30.684 gpu/*: missing internal dependency, "gpudev" 00:01:30.684 00:01:30.684 00:01:30.684 Build targets in project: 84 00:01:30.684 00:01:30.684 DPDK 24.03.0 00:01:30.684 00:01:30.684 User defined options 00:01:30.684 buildtype : debug 00:01:30.684 default_library : shared 00:01:30.684 libdir : lib 00:01:30.684 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:30.684 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:30.684 c_link_args : 00:01:30.684 cpu_instruction_set: native 00:01:30.684 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:30.684 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:30.684 enable_docs : false 00:01:30.684 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:30.684 enable_kmods : false 00:01:30.684 max_lcores : 128 00:01:30.684 tests : false 00:01:30.684 00:01:30.684 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.684 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:30.684 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:30.684 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:30.684 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:30.684 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:30.684 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:30.684 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:30.684 [7/267] Linking static target lib/librte_kvargs.a 00:01:30.684 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:30.684 [9/267] Linking static target lib/librte_log.a 00:01:30.684 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:30.684 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:30.684 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:30.684 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:30.684 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:30.684 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:30.944 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:30.944 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:30.944 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:30.944 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:30.944 [20/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:30.944 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:30.944 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:30.944 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:30.944 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:30.944 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:30.944 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:30.944 [27/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:30.944 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:30.944 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:30.944 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:30.944 [31/267] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:01:30.944 [32/267] Linking static target lib/librte_pci.a 00:01:30.944 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:30.944 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:30.944 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:30.944 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:31.203 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:31.203 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:31.203 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:31.203 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.203 [41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:31.203 [42/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:31.203 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:31.203 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:31.203 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:31.203 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:31.203 [47/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.203 [48/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:31.203 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:31.203 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:31.203 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:31.203 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:31.203 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:31.203 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:31.203 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:31.203 [56/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:31.203 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:31.203 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:31.203 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:31.203 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:31.203 [61/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:31.203 [62/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:31.203 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:31.203 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:31.203 [65/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.203 [66/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.203 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.203 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:31.203 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:31.489 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 
00:01:31.489 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:31.489 [72/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:31.489 [73/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:31.489 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:31.489 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:31.489 [76/267] Linking static target lib/librte_telemetry.a 00:01:31.489 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:31.489 [78/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:31.489 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:31.489 [80/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:31.489 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:31.489 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:31.489 [83/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:31.489 [84/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:31.489 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:31.489 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:31.489 [87/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:31.489 [88/267] Linking static target lib/librte_meter.a 00:01:31.489 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:31.489 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:31.489 [91/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:31.489 [92/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:31.489 [93/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:31.489 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:31.489 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.489 [96/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:31.489 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:31.489 [98/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:31.490 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:31.490 [100/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:31.490 [101/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:31.490 [102/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:31.490 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:31.490 [104/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:31.490 [105/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:31.490 [106/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:31.490 [107/267] Linking static target lib/librte_ring.a 00:01:31.490 [108/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:31.490 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:31.490 [110/267] Linking static target lib/librte_rcu.a 00:01:31.490 [111/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:31.490 
[112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:31.490 [113/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:31.490 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:31.490 [115/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:31.490 [116/267] Linking static target lib/librte_security.a 00:01:31.490 [117/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:31.490 [118/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:31.490 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:31.490 [120/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:31.490 [121/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:31.490 [122/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:31.490 [123/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:31.490 [124/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:31.490 [125/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:31.490 [126/267] Linking static target lib/librte_cmdline.a 00:01:31.490 [127/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:31.490 [128/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:31.490 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:31.490 [130/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.490 [131/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:31.490 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:31.490 [133/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:31.490 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:31.490 [135/267] Linking static target lib/librte_timer.a 00:01:31.490 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:31.490 [137/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:31.490 [138/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:31.490 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:31.490 [140/267] Linking target lib/librte_log.so.24.1 00:01:31.490 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:31.490 [142/267] Linking static target lib/librte_dmadev.a 00:01:31.490 [143/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:31.490 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:31.490 [145/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:31.490 [146/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:31.490 [147/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:31.490 [148/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:31.490 [149/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:31.490 [150/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:31.490 [151/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:31.490 [152/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:31.490 [153/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:31.490 [154/267] Linking static target lib/librte_net.a 00:01:31.490 [155/267] Linking static target lib/librte_reorder.a 00:01:31.490 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:31.490 [157/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:31.490 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:31.490 [159/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:31.490 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:31.490 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:31.490 [162/267] Linking static target lib/librte_mempool.a 00:01:31.490 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:31.490 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:31.490 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:31.490 [166/267] Linking static target lib/librte_eal.a 00:01:31.490 [167/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:31.490 [168/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:31.490 [169/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:31.490 [170/267] Linking static target lib/librte_compressdev.a 00:01:31.490 [171/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:31.490 [172/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:31.490 [173/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:31.490 [174/267] Linking static target lib/librte_power.a 00:01:31.490 [175/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:31.490 [176/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:31.753 [177/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:31.753 [178/267] Linking static target lib/librte_mbuf.a 00:01:31.753 [179/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.753 [180/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:31.753 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:31.753 [182/267] Linking target lib/librte_kvargs.so.24.1 00:01:31.753 [183/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:31.753 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:31.753 [185/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:31.753 [186/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:31.753 [187/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:31.753 [188/267] Linking static target drivers/librte_bus_vdev.a 00:01:31.753 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:31.753 [190/267] Linking static target lib/librte_hash.a 00:01:31.753 [191/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:31.753 [192/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:31.753 [193/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.753 [194/267] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:31.753 [195/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.753 [196/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:31.753 [197/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:31.753 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:31.753 [199/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:31.753 [200/267] Linking static target drivers/librte_bus_pci.a
00:01:31.753 [201/267] Linking static target lib/librte_cryptodev.a
00:01:31.753 [202/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:31.753 [203/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:31.753 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:31.753 [205/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:31.753 [206/267] Linking static target drivers/librte_mempool_ring.a
00:01:31.753 [207/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.014 [208/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.014 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.014 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:32.014 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.014 [212/267] Linking target lib/librte_telemetry.so.24.1
00:01:32.014 [213/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.014 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.275 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:32.275 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.275 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:32.275 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.537 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:32.537 [220/267] Linking static target lib/librte_ethdev.a
00:01:32.537 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.537 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.537 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.537 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.799 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.799 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.371 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:01:33.371 [228/267] Linking static target lib/librte_vhost.a
00:01:33.944 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.861 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.461 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.484 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.484 [233/267] Linking target lib/librte_eal.so.24.1
00:01:43.484 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:01:43.745 [235/267] Linking target lib/librte_ring.so.24.1
00:01:43.745 [236/267] Linking target lib/librte_timer.so.24.1
00:01:43.745 [237/267] Linking target lib/librte_meter.so.24.1
00:01:43.745 [238/267] Linking target lib/librte_pci.so.24.1
00:01:43.745 [239/267] Linking target drivers/librte_bus_vdev.so.24.1
00:01:43.745 [240/267] Linking target lib/librte_dmadev.so.24.1
00:01:43.745 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:01:43.745 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:01:43.745 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:01:43.745 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:01:43.745 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:01:44.006 [246/267] Linking target lib/librte_rcu.so.24.1
00:01:44.006 [247/267] Linking target lib/librte_mempool.so.24.1
00:01:44.006 [248/267] Linking target drivers/librte_bus_pci.so.24.1
00:01:44.006 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:01:44.006 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:01:44.006 [251/267] Linking target drivers/librte_mempool_ring.so.24.1
00:01:44.006 [252/267] Linking target lib/librte_mbuf.so.24.1
00:01:44.265 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:01:44.265 [254/267] Linking target lib/librte_compressdev.so.24.1
00:01:44.265 [255/267] Linking target lib/librte_net.so.24.1
00:01:44.265 [256/267] Linking target lib/librte_reorder.so.24.1
00:01:44.265 [257/267] Linking target lib/librte_cryptodev.so.24.1
00:01:44.526 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:01:44.526 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:01:44.526 [260/267] Linking target lib/librte_hash.so.24.1
00:01:44.526 [261/267] Linking target lib/librte_cmdline.so.24.1
00:01:44.526 [262/267] Linking target lib/librte_security.so.24.1
00:01:44.526 [263/267] Linking target lib/librte_ethdev.so.24.1
00:01:44.526 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:01:44.526 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:01:44.787 [266/267] Linking target lib/librte_power.so.24.1
00:01:44.787 [267/267] Linking target lib/librte_vhost.so.24.1
00:01:44.787 INFO: autodetecting backend as ninja
00:01:44.787 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144
00:01:45.727 CC lib/log/log.o
00:01:45.727 CC lib/log/log_flags.o
00:01:45.727 CC lib/log/log_deprecated.o
00:01:45.727 CC lib/ut_mock/mock.o
00:01:45.727 CC lib/ut/ut.o
00:01:45.987 LIB libspdk_log.a
00:01:45.987 LIB libspdk_ut.a
00:01:45.987 LIB libspdk_ut_mock.a
00:01:45.987 SO libspdk_log.so.7.0 00:01:45.987 SO libspdk_ut.so.2.0 00:01:45.987 SO libspdk_ut_mock.so.6.0 00:01:46.248 SYMLINK libspdk_ut.so 00:01:46.248 SYMLINK libspdk_log.so 00:01:46.248 SYMLINK libspdk_ut_mock.so 00:01:46.508 CC lib/util/base64.o 00:01:46.508 CC lib/util/bit_array.o 00:01:46.508 CC lib/dma/dma.o 00:01:46.508 CC lib/util/cpuset.o 00:01:46.508 CC lib/util/crc16.o 00:01:46.508 CC lib/util/crc32.o 00:01:46.508 CC lib/util/crc32c.o 00:01:46.508 CC lib/ioat/ioat.o 00:01:46.508 CC lib/util/crc32_ieee.o 00:01:46.508 CC lib/util/crc64.o 00:01:46.508 CC lib/util/dif.o 00:01:46.508 CXX lib/trace_parser/trace.o 00:01:46.508 CC lib/util/fd.o 00:01:46.508 CC lib/util/file.o 00:01:46.508 CC lib/util/hexlify.o 00:01:46.508 CC lib/util/iov.o 00:01:46.508 CC lib/util/math.o 00:01:46.508 CC lib/util/pipe.o 00:01:46.508 CC lib/util/strerror_tls.o 00:01:46.508 CC lib/util/string.o 00:01:46.508 CC lib/util/uuid.o 00:01:46.508 CC lib/util/fd_group.o 00:01:46.508 CC lib/util/xor.o 00:01:46.508 CC lib/util/zipf.o 00:01:46.767 CC lib/vfio_user/host/vfio_user_pci.o 00:01:46.767 CC lib/vfio_user/host/vfio_user.o 00:01:46.767 LIB libspdk_dma.a 00:01:46.767 SO libspdk_dma.so.4.0 00:01:46.767 LIB libspdk_ioat.a 00:01:46.767 SYMLINK libspdk_dma.so 00:01:46.767 SO libspdk_ioat.so.7.0 00:01:47.042 LIB libspdk_vfio_user.a 00:01:47.042 SYMLINK libspdk_ioat.so 00:01:47.042 SO libspdk_vfio_user.so.5.0 00:01:47.042 LIB libspdk_util.a 00:01:47.042 SYMLINK libspdk_vfio_user.so 00:01:47.042 SO libspdk_util.so.9.1 00:01:47.303 SYMLINK libspdk_util.so 00:01:47.303 LIB libspdk_trace_parser.a 00:01:47.303 SO libspdk_trace_parser.so.5.0 00:01:47.563 SYMLINK libspdk_trace_parser.so 00:01:47.563 CC lib/rdma_provider/common.o 00:01:47.563 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:47.563 CC lib/conf/conf.o 00:01:47.563 CC lib/json/json_parse.o 00:01:47.563 CC lib/json/json_util.o 00:01:47.563 CC lib/json/json_write.o 00:01:47.563 CC lib/rdma_utils/rdma_utils.o 00:01:47.563 CC lib/idxd/idxd.o 00:01:47.563 CC lib/idxd/idxd_user.o 00:01:47.563 CC lib/idxd/idxd_kernel.o 00:01:47.563 CC lib/vmd/vmd.o 00:01:47.563 CC lib/env_dpdk/env.o 00:01:47.563 CC lib/vmd/led.o 00:01:47.563 CC lib/env_dpdk/memory.o 00:01:47.563 CC lib/env_dpdk/pci.o 00:01:47.563 CC lib/env_dpdk/init.o 00:01:47.563 CC lib/env_dpdk/threads.o 00:01:47.563 CC lib/env_dpdk/pci_ioat.o 00:01:47.563 CC lib/env_dpdk/pci_virtio.o 00:01:47.563 CC lib/env_dpdk/pci_vmd.o 00:01:47.563 CC lib/env_dpdk/pci_idxd.o 00:01:47.563 CC lib/env_dpdk/pci_event.o 00:01:47.563 CC lib/env_dpdk/sigbus_handler.o 00:01:47.563 CC lib/env_dpdk/pci_dpdk.o 00:01:47.563 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:47.563 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:47.823 LIB libspdk_rdma_provider.a 00:01:47.823 LIB libspdk_conf.a 00:01:47.823 SO libspdk_rdma_provider.so.6.0 00:01:47.823 LIB libspdk_rdma_utils.a 00:01:47.823 SO libspdk_conf.so.6.0 00:01:47.823 LIB libspdk_json.a 00:01:47.823 SO libspdk_rdma_utils.so.1.0 00:01:48.082 SYMLINK libspdk_rdma_provider.so 00:01:48.082 SO libspdk_json.so.6.0 00:01:48.082 SYMLINK libspdk_conf.so 00:01:48.082 SYMLINK libspdk_rdma_utils.so 00:01:48.082 SYMLINK libspdk_json.so 00:01:48.082 LIB libspdk_idxd.a 00:01:48.082 SO libspdk_idxd.so.12.0 00:01:48.082 LIB libspdk_vmd.a 00:01:48.342 SO libspdk_vmd.so.6.0 00:01:48.342 SYMLINK libspdk_idxd.so 00:01:48.342 SYMLINK libspdk_vmd.so 00:01:48.342 CC lib/jsonrpc/jsonrpc_server.o 00:01:48.342 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:48.342 CC lib/jsonrpc/jsonrpc_client.o 00:01:48.342 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:01:48.601 LIB libspdk_jsonrpc.a 00:01:48.601 SO libspdk_jsonrpc.so.6.0 00:01:48.862 SYMLINK libspdk_jsonrpc.so 00:01:48.862 LIB libspdk_env_dpdk.a 00:01:48.862 SO libspdk_env_dpdk.so.14.1 00:01:49.123 SYMLINK libspdk_env_dpdk.so 00:01:49.123 CC lib/rpc/rpc.o 00:01:49.383 LIB libspdk_rpc.a 00:01:49.383 SO libspdk_rpc.so.6.0 00:01:49.383 SYMLINK libspdk_rpc.so 00:01:49.955 CC lib/trace/trace.o 00:01:49.955 CC lib/trace/trace_flags.o 00:01:49.955 CC lib/trace/trace_rpc.o 00:01:49.955 CC lib/notify/notify.o 00:01:49.955 CC lib/notify/notify_rpc.o 00:01:49.955 CC lib/keyring/keyring.o 00:01:49.955 CC lib/keyring/keyring_rpc.o 00:01:49.955 LIB libspdk_notify.a 00:01:49.955 LIB libspdk_keyring.a 00:01:49.955 SO libspdk_notify.so.6.0 00:01:49.955 LIB libspdk_trace.a 00:01:49.955 SO libspdk_keyring.so.1.0 00:01:50.215 SO libspdk_trace.so.10.0 00:01:50.215 SYMLINK libspdk_notify.so 00:01:50.215 SYMLINK libspdk_keyring.so 00:01:50.215 SYMLINK libspdk_trace.so 00:01:50.476 CC lib/thread/thread.o 00:01:50.476 CC lib/thread/iobuf.o 00:01:50.476 CC lib/sock/sock.o 00:01:50.476 CC lib/sock/sock_rpc.o 00:01:51.046 LIB libspdk_sock.a 00:01:51.047 SO libspdk_sock.so.10.0 00:01:51.047 SYMLINK libspdk_sock.so 00:01:51.307 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:51.307 CC lib/nvme/nvme_ctrlr.o 00:01:51.307 CC lib/nvme/nvme_fabric.o 00:01:51.307 CC lib/nvme/nvme_ns_cmd.o 00:01:51.307 CC lib/nvme/nvme_ns.o 00:01:51.307 CC lib/nvme/nvme_pcie_common.o 00:01:51.307 CC lib/nvme/nvme_pcie.o 00:01:51.307 CC lib/nvme/nvme_qpair.o 00:01:51.307 CC lib/nvme/nvme.o 00:01:51.307 CC lib/nvme/nvme_quirks.o 00:01:51.307 CC lib/nvme/nvme_transport.o 00:01:51.307 CC lib/nvme/nvme_discovery.o 00:01:51.307 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:51.307 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:51.307 CC lib/nvme/nvme_tcp.o 00:01:51.307 CC lib/nvme/nvme_opal.o 00:01:51.307 CC lib/nvme/nvme_io_msg.o 00:01:51.307 CC lib/nvme/nvme_poll_group.o 00:01:51.307 CC lib/nvme/nvme_zns.o 00:01:51.307 CC lib/nvme/nvme_stubs.o 00:01:51.307 CC lib/nvme/nvme_auth.o 00:01:51.307 CC lib/nvme/nvme_cuse.o 00:01:51.307 CC lib/nvme/nvme_vfio_user.o 00:01:51.307 CC lib/nvme/nvme_rdma.o 00:01:51.878 LIB libspdk_thread.a 00:01:51.878 SO libspdk_thread.so.10.1 00:01:51.878 SYMLINK libspdk_thread.so 00:01:52.448 CC lib/init/json_config.o 00:01:52.448 CC lib/init/subsystem.o 00:01:52.448 CC lib/init/subsystem_rpc.o 00:01:52.448 CC lib/init/rpc.o 00:01:52.448 CC lib/virtio/virtio.o 00:01:52.448 CC lib/virtio/virtio_vhost_user.o 00:01:52.448 CC lib/blob/blobstore.o 00:01:52.448 CC lib/virtio/virtio_vfio_user.o 00:01:52.448 CC lib/blob/request.o 00:01:52.448 CC lib/virtio/virtio_pci.o 00:01:52.448 CC lib/blob/zeroes.o 00:01:52.448 CC lib/blob/blob_bs_dev.o 00:01:52.448 CC lib/vfu_tgt/tgt_endpoint.o 00:01:52.448 CC lib/vfu_tgt/tgt_rpc.o 00:01:52.448 CC lib/accel/accel.o 00:01:52.448 CC lib/accel/accel_rpc.o 00:01:52.448 CC lib/accel/accel_sw.o 00:01:52.448 LIB libspdk_init.a 00:01:52.709 SO libspdk_init.so.5.0 00:01:52.709 LIB libspdk_virtio.a 00:01:52.709 LIB libspdk_vfu_tgt.a 00:01:52.709 SYMLINK libspdk_init.so 00:01:52.709 SO libspdk_virtio.so.7.0 00:01:52.709 SO libspdk_vfu_tgt.so.3.0 00:01:52.709 SYMLINK libspdk_vfu_tgt.so 00:01:52.709 SYMLINK libspdk_virtio.so 00:01:52.969 CC lib/event/app.o 00:01:52.969 CC lib/event/reactor.o 00:01:52.969 CC lib/event/log_rpc.o 00:01:52.969 CC lib/event/app_rpc.o 00:01:52.969 CC lib/event/scheduler_static.o 00:01:53.230 LIB libspdk_accel.a 00:01:53.230 SO libspdk_accel.so.15.1 00:01:53.230 LIB 
libspdk_nvme.a 00:01:53.230 SYMLINK libspdk_accel.so 00:01:53.492 SO libspdk_nvme.so.13.1 00:01:53.492 LIB libspdk_event.a 00:01:53.492 SO libspdk_event.so.14.0 00:01:53.492 SYMLINK libspdk_event.so 00:01:53.753 CC lib/bdev/bdev.o 00:01:53.753 CC lib/bdev/bdev_rpc.o 00:01:53.753 CC lib/bdev/bdev_zone.o 00:01:53.753 CC lib/bdev/part.o 00:01:53.753 CC lib/bdev/scsi_nvme.o 00:01:53.753 SYMLINK libspdk_nvme.so 00:01:54.696 LIB libspdk_blob.a 00:01:54.696 SO libspdk_blob.so.11.0 00:01:54.696 SYMLINK libspdk_blob.so 00:01:55.269 CC lib/lvol/lvol.o 00:01:55.269 CC lib/blobfs/blobfs.o 00:01:55.269 CC lib/blobfs/tree.o 00:01:55.842 LIB libspdk_bdev.a 00:01:55.842 LIB libspdk_blobfs.a 00:01:55.842 SO libspdk_bdev.so.15.1 00:01:55.842 SO libspdk_blobfs.so.10.0 00:01:56.103 LIB libspdk_lvol.a 00:01:56.103 SYMLINK libspdk_blobfs.so 00:01:56.103 SYMLINK libspdk_bdev.so 00:01:56.103 SO libspdk_lvol.so.10.0 00:01:56.103 SYMLINK libspdk_lvol.so 00:01:56.364 CC lib/ublk/ublk.o 00:01:56.364 CC lib/ublk/ublk_rpc.o 00:01:56.364 CC lib/nbd/nbd.o 00:01:56.364 CC lib/nvmf/ctrlr.o 00:01:56.364 CC lib/scsi/dev.o 00:01:56.364 CC lib/nbd/nbd_rpc.o 00:01:56.364 CC lib/ftl/ftl_core.o 00:01:56.364 CC lib/nvmf/ctrlr_discovery.o 00:01:56.365 CC lib/scsi/lun.o 00:01:56.365 CC lib/ftl/ftl_init.o 00:01:56.365 CC lib/nvmf/ctrlr_bdev.o 00:01:56.365 CC lib/scsi/port.o 00:01:56.365 CC lib/ftl/ftl_layout.o 00:01:56.365 CC lib/ftl/ftl_debug.o 00:01:56.365 CC lib/scsi/scsi.o 00:01:56.365 CC lib/nvmf/subsystem.o 00:01:56.365 CC lib/ftl/ftl_io.o 00:01:56.365 CC lib/nvmf/nvmf.o 00:01:56.365 CC lib/scsi/scsi_bdev.o 00:01:56.365 CC lib/ftl/ftl_sb.o 00:01:56.365 CC lib/scsi/scsi_pr.o 00:01:56.365 CC lib/nvmf/nvmf_rpc.o 00:01:56.365 CC lib/ftl/ftl_l2p.o 00:01:56.365 CC lib/scsi/scsi_rpc.o 00:01:56.365 CC lib/ftl/ftl_l2p_flat.o 00:01:56.365 CC lib/nvmf/transport.o 00:01:56.365 CC lib/scsi/task.o 00:01:56.365 CC lib/nvmf/tcp.o 00:01:56.365 CC lib/ftl/ftl_nv_cache.o 00:01:56.365 CC lib/nvmf/stubs.o 00:01:56.365 CC lib/ftl/ftl_band.o 00:01:56.365 CC lib/nvmf/mdns_server.o 00:01:56.365 CC lib/ftl/ftl_band_ops.o 00:01:56.365 CC lib/nvmf/vfio_user.o 00:01:56.365 CC lib/nvmf/rdma.o 00:01:56.365 CC lib/ftl/ftl_writer.o 00:01:56.365 CC lib/nvmf/auth.o 00:01:56.365 CC lib/ftl/ftl_rq.o 00:01:56.365 CC lib/ftl/ftl_reloc.o 00:01:56.365 CC lib/ftl/ftl_l2p_cache.o 00:01:56.365 CC lib/ftl/ftl_p2l.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:56.365 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:56.365 CC lib/ftl/utils/ftl_conf.o 00:01:56.365 CC lib/ftl/utils/ftl_md.o 00:01:56.365 CC lib/ftl/utils/ftl_mempool.o 00:01:56.365 CC lib/ftl/utils/ftl_bitmap.o 00:01:56.365 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:56.365 CC lib/ftl/utils/ftl_property.o 00:01:56.365 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:56.365 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:56.365 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:56.365 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:56.365 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:56.365 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:56.365 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:56.365 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:56.365 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:56.365 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:56.365 CC lib/ftl/base/ftl_base_bdev.o 00:01:56.365 CC lib/ftl/base/ftl_base_dev.o 00:01:56.365 CC lib/ftl/ftl_trace.o 00:01:56.931 LIB libspdk_nbd.a 00:01:56.931 SO libspdk_nbd.so.7.0 00:01:56.931 SYMLINK libspdk_nbd.so 00:01:56.931 LIB libspdk_scsi.a 00:01:56.931 SO libspdk_scsi.so.9.0 00:01:57.192 LIB libspdk_ublk.a 00:01:57.192 SYMLINK libspdk_scsi.so 00:01:57.192 SO libspdk_ublk.so.3.0 00:01:57.192 SYMLINK libspdk_ublk.so 00:01:57.192 LIB libspdk_ftl.a 00:01:57.454 SO libspdk_ftl.so.9.0 00:01:57.454 CC lib/iscsi/conn.o 00:01:57.454 CC lib/vhost/vhost.o 00:01:57.454 CC lib/iscsi/init_grp.o 00:01:57.454 CC lib/vhost/vhost_rpc.o 00:01:57.454 CC lib/iscsi/iscsi.o 00:01:57.454 CC lib/vhost/vhost_scsi.o 00:01:57.454 CC lib/iscsi/md5.o 00:01:57.454 CC lib/vhost/vhost_blk.o 00:01:57.454 CC lib/iscsi/param.o 00:01:57.454 CC lib/vhost/rte_vhost_user.o 00:01:57.454 CC lib/iscsi/portal_grp.o 00:01:57.454 CC lib/iscsi/tgt_node.o 00:01:57.454 CC lib/iscsi/iscsi_subsystem.o 00:01:57.454 CC lib/iscsi/iscsi_rpc.o 00:01:57.454 CC lib/iscsi/task.o 00:01:57.715 SYMLINK libspdk_ftl.so 00:01:58.287 LIB libspdk_nvmf.a 00:01:58.287 SO libspdk_nvmf.so.18.1 00:01:58.547 LIB libspdk_vhost.a 00:01:58.547 SO libspdk_vhost.so.8.0 00:01:58.547 SYMLINK libspdk_nvmf.so 00:01:58.547 SYMLINK libspdk_vhost.so 00:01:58.808 LIB libspdk_iscsi.a 00:01:58.808 SO libspdk_iscsi.so.8.0 00:01:59.069 SYMLINK libspdk_iscsi.so 00:01:59.640 CC module/vfu_device/vfu_virtio.o 00:01:59.640 CC module/vfu_device/vfu_virtio_blk.o 00:01:59.640 CC module/vfu_device/vfu_virtio_scsi.o 00:01:59.640 CC module/vfu_device/vfu_virtio_rpc.o 00:01:59.640 CC module/env_dpdk/env_dpdk_rpc.o 00:01:59.640 LIB libspdk_env_dpdk_rpc.a 00:01:59.640 CC module/blob/bdev/blob_bdev.o 00:01:59.640 CC module/accel/dsa/accel_dsa.o 00:01:59.640 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:59.640 CC module/accel/dsa/accel_dsa_rpc.o 00:01:59.640 CC module/scheduler/gscheduler/gscheduler.o 00:01:59.640 CC module/sock/posix/posix.o 00:01:59.640 CC module/keyring/file/keyring.o 00:01:59.640 CC module/keyring/file/keyring_rpc.o 00:01:59.640 CC module/accel/ioat/accel_ioat.o 00:01:59.640 CC module/accel/iaa/accel_iaa.o 00:01:59.640 CC module/accel/ioat/accel_ioat_rpc.o 00:01:59.640 CC module/accel/iaa/accel_iaa_rpc.o 00:01:59.640 CC module/accel/error/accel_error.o 00:01:59.640 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:59.640 CC module/accel/error/accel_error_rpc.o 00:01:59.640 CC module/keyring/linux/keyring.o 00:01:59.640 CC module/keyring/linux/keyring_rpc.o 00:01:59.640 SO libspdk_env_dpdk_rpc.so.6.0 00:01:59.900 SYMLINK libspdk_env_dpdk_rpc.so 00:01:59.901 LIB libspdk_scheduler_dpdk_governor.a 00:01:59.901 LIB libspdk_scheduler_gscheduler.a 00:01:59.901 LIB libspdk_keyring_linux.a 00:01:59.901 LIB libspdk_keyring_file.a 00:01:59.901 SO libspdk_keyring_linux.so.1.0 00:01:59.901 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:59.901 SO libspdk_scheduler_gscheduler.so.4.0 00:01:59.901 LIB libspdk_scheduler_dynamic.a 00:01:59.901 LIB libspdk_accel_error.a 00:01:59.901 SO libspdk_keyring_file.so.1.0 00:01:59.901 LIB libspdk_accel_ioat.a 00:01:59.901 LIB libspdk_accel_iaa.a 00:01:59.901 LIB libspdk_accel_dsa.a 00:01:59.901 SO libspdk_accel_ioat.so.6.0 00:01:59.901 SO libspdk_scheduler_dynamic.so.4.0 00:01:59.901 LIB libspdk_blob_bdev.a 00:01:59.901 SYMLINK 
libspdk_scheduler_gscheduler.so 00:01:59.901 SYMLINK libspdk_keyring_linux.so 00:01:59.901 SO libspdk_accel_error.so.2.0 00:01:59.901 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:59.901 SYMLINK libspdk_keyring_file.so 00:01:59.901 SO libspdk_accel_iaa.so.3.0 00:01:59.901 SO libspdk_accel_dsa.so.5.0 00:01:59.901 SO libspdk_blob_bdev.so.11.0 00:02:00.162 SYMLINK libspdk_accel_ioat.so 00:02:00.162 SYMLINK libspdk_scheduler_dynamic.so 00:02:00.162 SYMLINK libspdk_accel_error.so 00:02:00.162 LIB libspdk_vfu_device.a 00:02:00.162 SYMLINK libspdk_accel_dsa.so 00:02:00.162 SYMLINK libspdk_accel_iaa.so 00:02:00.162 SYMLINK libspdk_blob_bdev.so 00:02:00.162 SO libspdk_vfu_device.so.3.0 00:02:00.162 SYMLINK libspdk_vfu_device.so 00:02:00.424 LIB libspdk_sock_posix.a 00:02:00.424 SO libspdk_sock_posix.so.6.0 00:02:00.424 SYMLINK libspdk_sock_posix.so 00:02:00.684 CC module/bdev/gpt/gpt.o 00:02:00.684 CC module/bdev/gpt/vbdev_gpt.o 00:02:00.684 CC module/bdev/malloc/bdev_malloc.o 00:02:00.684 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:00.684 CC module/bdev/lvol/vbdev_lvol.o 00:02:00.684 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:00.684 CC module/blobfs/bdev/blobfs_bdev.o 00:02:00.684 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:00.684 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:00.684 CC module/bdev/delay/vbdev_delay.o 00:02:00.684 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:00.684 CC module/bdev/error/vbdev_error.o 00:02:00.684 CC module/bdev/raid/bdev_raid.o 00:02:00.684 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:00.684 CC module/bdev/null/bdev_null.o 00:02:00.684 CC module/bdev/error/vbdev_error_rpc.o 00:02:00.684 CC module/bdev/passthru/vbdev_passthru.o 00:02:00.684 CC module/bdev/nvme/bdev_nvme.o 00:02:00.684 CC module/bdev/null/bdev_null_rpc.o 00:02:00.684 CC module/bdev/aio/bdev_aio.o 00:02:00.684 CC module/bdev/raid/bdev_raid_rpc.o 00:02:00.684 CC module/bdev/raid/bdev_raid_sb.o 00:02:00.684 CC module/bdev/aio/bdev_aio_rpc.o 00:02:00.684 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:00.684 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:00.684 CC module/bdev/raid/raid0.o 00:02:00.684 CC module/bdev/nvme/nvme_rpc.o 00:02:00.684 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:00.684 CC module/bdev/raid/concat.o 00:02:00.684 CC module/bdev/nvme/bdev_mdns_client.o 00:02:00.684 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:00.684 CC module/bdev/raid/raid1.o 00:02:00.684 CC module/bdev/split/vbdev_split.o 00:02:00.684 CC module/bdev/iscsi/bdev_iscsi.o 00:02:00.684 CC module/bdev/split/vbdev_split_rpc.o 00:02:00.684 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:00.684 CC module/bdev/nvme/vbdev_opal.o 00:02:00.684 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:00.684 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:00.684 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:00.684 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:00.684 CC module/bdev/ftl/bdev_ftl.o 00:02:00.945 LIB libspdk_blobfs_bdev.a 00:02:00.945 SO libspdk_blobfs_bdev.so.6.0 00:02:00.945 LIB libspdk_bdev_gpt.a 00:02:00.945 SO libspdk_bdev_gpt.so.6.0 00:02:00.945 LIB libspdk_bdev_split.a 00:02:00.945 LIB libspdk_bdev_error.a 00:02:00.945 LIB libspdk_bdev_null.a 00:02:00.945 SYMLINK libspdk_blobfs_bdev.so 00:02:00.945 LIB libspdk_bdev_passthru.a 00:02:00.945 SO libspdk_bdev_null.so.6.0 00:02:00.945 LIB libspdk_bdev_ftl.a 00:02:00.945 SO libspdk_bdev_split.so.6.0 00:02:00.945 SO libspdk_bdev_error.so.6.0 00:02:00.945 LIB libspdk_bdev_malloc.a 00:02:00.945 LIB libspdk_bdev_aio.a 00:02:00.945 LIB libspdk_bdev_iscsi.a 00:02:00.945 
SYMLINK libspdk_bdev_gpt.so 00:02:00.945 LIB libspdk_bdev_zone_block.a 00:02:00.945 SO libspdk_bdev_passthru.so.6.0 00:02:01.207 SO libspdk_bdev_ftl.so.6.0 00:02:01.207 SO libspdk_bdev_malloc.so.6.0 00:02:01.207 LIB libspdk_bdev_delay.a 00:02:01.207 SO libspdk_bdev_aio.so.6.0 00:02:01.207 SYMLINK libspdk_bdev_null.so 00:02:01.207 SO libspdk_bdev_iscsi.so.6.0 00:02:01.207 SYMLINK libspdk_bdev_split.so 00:02:01.207 SO libspdk_bdev_zone_block.so.6.0 00:02:01.207 SYMLINK libspdk_bdev_error.so 00:02:01.207 SO libspdk_bdev_delay.so.6.0 00:02:01.207 SYMLINK libspdk_bdev_passthru.so 00:02:01.207 SYMLINK libspdk_bdev_malloc.so 00:02:01.207 SYMLINK libspdk_bdev_ftl.so 00:02:01.207 SYMLINK libspdk_bdev_zone_block.so 00:02:01.207 SYMLINK libspdk_bdev_iscsi.so 00:02:01.207 SYMLINK libspdk_bdev_aio.so 00:02:01.207 LIB libspdk_bdev_lvol.a 00:02:01.207 SYMLINK libspdk_bdev_delay.so 00:02:01.207 LIB libspdk_bdev_virtio.a 00:02:01.207 SO libspdk_bdev_lvol.so.6.0 00:02:01.207 SO libspdk_bdev_virtio.so.6.0 00:02:01.207 SYMLINK libspdk_bdev_lvol.so 00:02:01.469 SYMLINK libspdk_bdev_virtio.so 00:02:01.469 LIB libspdk_bdev_raid.a 00:02:01.730 SO libspdk_bdev_raid.so.6.0 00:02:01.730 SYMLINK libspdk_bdev_raid.so 00:02:02.676 LIB libspdk_bdev_nvme.a 00:02:02.676 SO libspdk_bdev_nvme.so.7.0 00:02:02.676 SYMLINK libspdk_bdev_nvme.so 00:02:03.671 CC module/event/subsystems/iobuf/iobuf.o 00:02:03.671 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:03.671 CC module/event/subsystems/vmd/vmd.o 00:02:03.671 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:03.671 CC module/event/subsystems/scheduler/scheduler.o 00:02:03.671 CC module/event/subsystems/keyring/keyring.o 00:02:03.671 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:03.671 CC module/event/subsystems/sock/sock.o 00:02:03.671 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:03.671 LIB libspdk_event_scheduler.a 00:02:03.671 LIB libspdk_event_keyring.a 00:02:03.671 LIB libspdk_event_vmd.a 00:02:03.671 LIB libspdk_event_iobuf.a 00:02:03.671 LIB libspdk_event_vhost_blk.a 00:02:03.671 LIB libspdk_event_sock.a 00:02:03.671 LIB libspdk_event_vfu_tgt.a 00:02:03.671 SO libspdk_event_scheduler.so.4.0 00:02:03.671 SO libspdk_event_keyring.so.1.0 00:02:03.671 SO libspdk_event_vhost_blk.so.3.0 00:02:03.671 SO libspdk_event_vmd.so.6.0 00:02:03.671 SO libspdk_event_sock.so.5.0 00:02:03.671 SO libspdk_event_iobuf.so.3.0 00:02:03.671 SO libspdk_event_vfu_tgt.so.3.0 00:02:03.671 SYMLINK libspdk_event_scheduler.so 00:02:03.671 SYMLINK libspdk_event_keyring.so 00:02:03.671 SYMLINK libspdk_event_vmd.so 00:02:03.974 SYMLINK libspdk_event_vhost_blk.so 00:02:03.974 SYMLINK libspdk_event_sock.so 00:02:03.974 SYMLINK libspdk_event_iobuf.so 00:02:03.974 SYMLINK libspdk_event_vfu_tgt.so 00:02:04.236 CC module/event/subsystems/accel/accel.o 00:02:04.236 LIB libspdk_event_accel.a 00:02:04.236 SO libspdk_event_accel.so.6.0 00:02:04.497 SYMLINK libspdk_event_accel.so 00:02:04.758 CC module/event/subsystems/bdev/bdev.o 00:02:05.018 LIB libspdk_event_bdev.a 00:02:05.018 SO libspdk_event_bdev.so.6.0 00:02:05.018 SYMLINK libspdk_event_bdev.so 00:02:05.590 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:05.590 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:05.590 CC module/event/subsystems/scsi/scsi.o 00:02:05.590 CC module/event/subsystems/ublk/ublk.o 00:02:05.590 CC module/event/subsystems/nbd/nbd.o 00:02:05.590 LIB libspdk_event_ublk.a 00:02:05.590 LIB libspdk_event_nbd.a 00:02:05.590 LIB libspdk_event_scsi.a 00:02:05.590 SO libspdk_event_ublk.so.3.0 00:02:05.590 SO 
libspdk_event_nbd.so.6.0 00:02:05.590 SO libspdk_event_scsi.so.6.0 00:02:05.590 LIB libspdk_event_nvmf.a 00:02:05.590 SYMLINK libspdk_event_nbd.so 00:02:05.590 SYMLINK libspdk_event_ublk.so 00:02:05.590 SO libspdk_event_nvmf.so.6.0 00:02:05.590 SYMLINK libspdk_event_scsi.so 00:02:05.851 SYMLINK libspdk_event_nvmf.so 00:02:06.121 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:06.121 CC module/event/subsystems/iscsi/iscsi.o 00:02:06.121 LIB libspdk_event_vhost_scsi.a 00:02:06.387 LIB libspdk_event_iscsi.a 00:02:06.387 SO libspdk_event_vhost_scsi.so.3.0 00:02:06.387 SO libspdk_event_iscsi.so.6.0 00:02:06.387 SYMLINK libspdk_event_vhost_scsi.so 00:02:06.387 SYMLINK libspdk_event_iscsi.so 00:02:06.648 SO libspdk.so.6.0 00:02:06.648 SYMLINK libspdk.so 00:02:06.908 CXX app/trace/trace.o 00:02:06.908 CC test/rpc_client/rpc_client_test.o 00:02:06.908 CC app/trace_record/trace_record.o 00:02:06.908 CC app/spdk_lspci/spdk_lspci.o 00:02:06.908 TEST_HEADER include/spdk/accel.h 00:02:06.908 TEST_HEADER include/spdk/accel_module.h 00:02:06.908 TEST_HEADER include/spdk/assert.h 00:02:06.908 TEST_HEADER include/spdk/barrier.h 00:02:06.908 TEST_HEADER include/spdk/base64.h 00:02:06.908 TEST_HEADER include/spdk/bdev.h 00:02:06.908 TEST_HEADER include/spdk/bdev_module.h 00:02:06.908 CC app/spdk_nvme_identify/identify.o 00:02:06.908 CC app/spdk_nvme_discover/discovery_aer.o 00:02:06.908 CC app/spdk_top/spdk_top.o 00:02:06.908 TEST_HEADER include/spdk/bdev_zone.h 00:02:06.908 TEST_HEADER include/spdk/bit_array.h 00:02:06.908 TEST_HEADER include/spdk/bit_pool.h 00:02:06.908 TEST_HEADER include/spdk/blob_bdev.h 00:02:06.908 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:06.908 CC app/spdk_nvme_perf/perf.o 00:02:06.908 TEST_HEADER include/spdk/blobfs.h 00:02:06.908 TEST_HEADER include/spdk/blob.h 00:02:06.908 TEST_HEADER include/spdk/conf.h 00:02:06.908 TEST_HEADER include/spdk/config.h 00:02:06.908 TEST_HEADER include/spdk/crc16.h 00:02:06.908 TEST_HEADER include/spdk/cpuset.h 00:02:06.908 TEST_HEADER include/spdk/crc32.h 00:02:06.908 TEST_HEADER include/spdk/crc64.h 00:02:06.908 TEST_HEADER include/spdk/dif.h 00:02:06.908 TEST_HEADER include/spdk/endian.h 00:02:06.908 TEST_HEADER include/spdk/dma.h 00:02:06.908 TEST_HEADER include/spdk/env_dpdk.h 00:02:06.908 TEST_HEADER include/spdk/env.h 00:02:06.908 TEST_HEADER include/spdk/event.h 00:02:06.908 TEST_HEADER include/spdk/fd_group.h 00:02:06.908 TEST_HEADER include/spdk/file.h 00:02:06.908 TEST_HEADER include/spdk/fd.h 00:02:06.908 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:06.908 TEST_HEADER include/spdk/ftl.h 00:02:06.908 TEST_HEADER include/spdk/gpt_spec.h 00:02:06.908 TEST_HEADER include/spdk/hexlify.h 00:02:06.908 TEST_HEADER include/spdk/histogram_data.h 00:02:06.908 TEST_HEADER include/spdk/idxd.h 00:02:06.908 TEST_HEADER include/spdk/idxd_spec.h 00:02:06.908 TEST_HEADER include/spdk/init.h 00:02:06.908 TEST_HEADER include/spdk/ioat.h 00:02:06.908 TEST_HEADER include/spdk/ioat_spec.h 00:02:06.908 CC app/spdk_dd/spdk_dd.o 00:02:06.908 CC app/iscsi_tgt/iscsi_tgt.o 00:02:06.908 TEST_HEADER include/spdk/iscsi_spec.h 00:02:06.908 TEST_HEADER include/spdk/json.h 00:02:06.908 TEST_HEADER include/spdk/jsonrpc.h 00:02:06.908 TEST_HEADER include/spdk/keyring.h 00:02:06.908 TEST_HEADER include/spdk/keyring_module.h 00:02:06.908 CC app/nvmf_tgt/nvmf_main.o 00:02:06.908 TEST_HEADER include/spdk/likely.h 00:02:06.908 TEST_HEADER include/spdk/log.h 00:02:06.908 TEST_HEADER include/spdk/memory.h 00:02:06.908 TEST_HEADER include/spdk/lvol.h 00:02:06.908 
TEST_HEADER include/spdk/mmio.h 00:02:06.908 TEST_HEADER include/spdk/nbd.h 00:02:07.171 TEST_HEADER include/spdk/notify.h 00:02:07.171 TEST_HEADER include/spdk/nvme.h 00:02:07.171 TEST_HEADER include/spdk/nvme_intel.h 00:02:07.171 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:07.171 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:07.171 TEST_HEADER include/spdk/nvme_zns.h 00:02:07.171 TEST_HEADER include/spdk/nvme_spec.h 00:02:07.171 CC app/spdk_tgt/spdk_tgt.o 00:02:07.171 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:07.171 TEST_HEADER include/spdk/nvmf.h 00:02:07.171 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:07.171 TEST_HEADER include/spdk/nvmf_spec.h 00:02:07.171 TEST_HEADER include/spdk/nvmf_transport.h 00:02:07.171 TEST_HEADER include/spdk/opal.h 00:02:07.171 TEST_HEADER include/spdk/opal_spec.h 00:02:07.171 TEST_HEADER include/spdk/pipe.h 00:02:07.171 TEST_HEADER include/spdk/pci_ids.h 00:02:07.171 TEST_HEADER include/spdk/queue.h 00:02:07.171 TEST_HEADER include/spdk/reduce.h 00:02:07.171 TEST_HEADER include/spdk/rpc.h 00:02:07.171 TEST_HEADER include/spdk/scheduler.h 00:02:07.171 TEST_HEADER include/spdk/scsi.h 00:02:07.171 TEST_HEADER include/spdk/scsi_spec.h 00:02:07.171 TEST_HEADER include/spdk/sock.h 00:02:07.171 TEST_HEADER include/spdk/stdinc.h 00:02:07.171 TEST_HEADER include/spdk/string.h 00:02:07.171 TEST_HEADER include/spdk/thread.h 00:02:07.171 TEST_HEADER include/spdk/trace.h 00:02:07.171 TEST_HEADER include/spdk/trace_parser.h 00:02:07.171 TEST_HEADER include/spdk/tree.h 00:02:07.171 TEST_HEADER include/spdk/ublk.h 00:02:07.171 TEST_HEADER include/spdk/util.h 00:02:07.171 TEST_HEADER include/spdk/uuid.h 00:02:07.171 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:07.171 TEST_HEADER include/spdk/version.h 00:02:07.171 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:07.171 TEST_HEADER include/spdk/vhost.h 00:02:07.171 TEST_HEADER include/spdk/vmd.h 00:02:07.171 TEST_HEADER include/spdk/xor.h 00:02:07.171 CXX test/cpp_headers/accel.o 00:02:07.171 TEST_HEADER include/spdk/zipf.h 00:02:07.171 CXX test/cpp_headers/accel_module.o 00:02:07.171 CXX test/cpp_headers/assert.o 00:02:07.171 CXX test/cpp_headers/base64.o 00:02:07.171 CXX test/cpp_headers/barrier.o 00:02:07.171 CXX test/cpp_headers/bdev_module.o 00:02:07.171 CXX test/cpp_headers/bdev.o 00:02:07.171 CXX test/cpp_headers/bdev_zone.o 00:02:07.171 CXX test/cpp_headers/bit_array.o 00:02:07.171 CXX test/cpp_headers/bit_pool.o 00:02:07.171 CXX test/cpp_headers/blob_bdev.o 00:02:07.171 CXX test/cpp_headers/blobfs.o 00:02:07.171 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.171 CXX test/cpp_headers/conf.o 00:02:07.171 CXX test/cpp_headers/config.o 00:02:07.171 CXX test/cpp_headers/blob.o 00:02:07.171 CXX test/cpp_headers/cpuset.o 00:02:07.171 CXX test/cpp_headers/crc16.o 00:02:07.171 CXX test/cpp_headers/crc32.o 00:02:07.171 CXX test/cpp_headers/dif.o 00:02:07.171 CXX test/cpp_headers/crc64.o 00:02:07.171 CXX test/cpp_headers/dma.o 00:02:07.171 CXX test/cpp_headers/endian.o 00:02:07.171 CXX test/cpp_headers/env_dpdk.o 00:02:07.171 CXX test/cpp_headers/event.o 00:02:07.171 CXX test/cpp_headers/env.o 00:02:07.171 CXX test/cpp_headers/fd_group.o 00:02:07.171 CXX test/cpp_headers/fd.o 00:02:07.171 CXX test/cpp_headers/file.o 00:02:07.171 CXX test/cpp_headers/gpt_spec.o 00:02:07.171 CXX test/cpp_headers/ftl.o 00:02:07.171 CXX test/cpp_headers/hexlify.o 00:02:07.171 CXX test/cpp_headers/histogram_data.o 00:02:07.171 CXX test/cpp_headers/idxd.o 00:02:07.171 CXX test/cpp_headers/idxd_spec.o 00:02:07.171 CXX test/cpp_headers/init.o 
00:02:07.171 CXX test/cpp_headers/ioat.o 00:02:07.171 CXX test/cpp_headers/iscsi_spec.o 00:02:07.171 CXX test/cpp_headers/ioat_spec.o 00:02:07.171 CXX test/cpp_headers/jsonrpc.o 00:02:07.171 CXX test/cpp_headers/json.o 00:02:07.171 CXX test/cpp_headers/keyring_module.o 00:02:07.171 CXX test/cpp_headers/keyring.o 00:02:07.171 CXX test/cpp_headers/likely.o 00:02:07.171 CXX test/cpp_headers/lvol.o 00:02:07.171 CXX test/cpp_headers/memory.o 00:02:07.171 CXX test/cpp_headers/mmio.o 00:02:07.171 CXX test/cpp_headers/log.o 00:02:07.171 CXX test/cpp_headers/nbd.o 00:02:07.171 CC examples/util/zipf/zipf.o 00:02:07.171 CXX test/cpp_headers/notify.o 00:02:07.171 CXX test/cpp_headers/nvme.o 00:02:07.171 CXX test/cpp_headers/nvme_intel.o 00:02:07.171 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.171 CXX test/cpp_headers/nvme_zns.o 00:02:07.171 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.171 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.171 CC examples/ioat/perf/perf.o 00:02:07.171 CXX test/cpp_headers/nvme_spec.o 00:02:07.171 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:07.171 CXX test/cpp_headers/nvmf.o 00:02:07.171 CXX test/cpp_headers/nvmf_spec.o 00:02:07.171 CXX test/cpp_headers/pci_ids.o 00:02:07.171 CXX test/cpp_headers/nvmf_transport.o 00:02:07.171 CXX test/cpp_headers/opal.o 00:02:07.171 CXX test/cpp_headers/opal_spec.o 00:02:07.171 CXX test/cpp_headers/pipe.o 00:02:07.171 CC examples/ioat/verify/verify.o 00:02:07.171 CXX test/cpp_headers/queue.o 00:02:07.171 CXX test/cpp_headers/scsi.o 00:02:07.171 CXX test/cpp_headers/reduce.o 00:02:07.171 CXX test/cpp_headers/scsi_spec.o 00:02:07.171 CXX test/cpp_headers/rpc.o 00:02:07.171 CXX test/cpp_headers/sock.o 00:02:07.171 CXX test/cpp_headers/scheduler.o 00:02:07.171 CC test/thread/poller_perf/poller_perf.o 00:02:07.171 CXX test/cpp_headers/string.o 00:02:07.171 CXX test/cpp_headers/stdinc.o 00:02:07.171 CXX test/cpp_headers/thread.o 00:02:07.171 LINK spdk_lspci 00:02:07.171 CXX test/cpp_headers/trace.o 00:02:07.171 CXX test/cpp_headers/ublk.o 00:02:07.171 CXX test/cpp_headers/trace_parser.o 00:02:07.171 CXX test/cpp_headers/util.o 00:02:07.171 CXX test/cpp_headers/tree.o 00:02:07.171 CXX test/cpp_headers/uuid.o 00:02:07.171 CXX test/cpp_headers/version.o 00:02:07.171 CXX test/cpp_headers/vfio_user_pci.o 00:02:07.171 CXX test/cpp_headers/vfio_user_spec.o 00:02:07.438 CXX test/cpp_headers/vhost.o 00:02:07.438 CXX test/cpp_headers/xor.o 00:02:07.438 CXX test/cpp_headers/vmd.o 00:02:07.438 CC test/env/vtophys/vtophys.o 00:02:07.438 CXX test/cpp_headers/zipf.o 00:02:07.438 CC test/env/pci/pci_ut.o 00:02:07.438 CC test/app/stub/stub.o 00:02:07.438 CC app/fio/nvme/fio_plugin.o 00:02:07.438 LINK rpc_client_test 00:02:07.438 CC test/env/memory/memory_ut.o 00:02:07.438 CC test/app/histogram_perf/histogram_perf.o 00:02:07.438 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:07.438 CC test/app/jsoncat/jsoncat.o 00:02:07.438 CC test/dma/test_dma/test_dma.o 00:02:07.438 CC test/app/bdev_svc/bdev_svc.o 00:02:07.438 LINK interrupt_tgt 00:02:07.438 LINK spdk_nvme_discover 00:02:07.438 CC app/fio/bdev/fio_plugin.o 00:02:07.711 LINK spdk_trace_record 00:02:07.711 LINK spdk_tgt 00:02:07.711 LINK nvmf_tgt 00:02:07.973 LINK iscsi_tgt 00:02:07.973 CC test/env/mem_callbacks/mem_callbacks.o 00:02:07.973 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:08.234 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:08.234 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:08.234 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:08.234 LINK zipf 00:02:08.234 LINK jsoncat 00:02:08.234 LINK 
poller_perf 00:02:08.495 LINK spdk_dd 00:02:08.495 LINK bdev_svc 00:02:08.495 LINK vtophys 00:02:08.495 LINK histogram_perf 00:02:08.495 LINK spdk_trace 00:02:08.495 LINK env_dpdk_post_init 00:02:08.495 LINK test_dma 00:02:08.495 LINK stub 00:02:08.495 LINK ioat_perf 00:02:08.495 LINK verify 00:02:08.757 LINK nvme_fuzz 00:02:08.757 LINK pci_ut 00:02:08.757 LINK vhost_fuzz 00:02:08.757 LINK spdk_bdev 00:02:08.757 LINK spdk_nvme_perf 00:02:08.757 LINK spdk_nvme 00:02:08.757 LINK mem_callbacks 00:02:09.018 CC app/vhost/vhost.o 00:02:09.018 LINK spdk_top 00:02:09.018 CC examples/sock/hello_world/hello_sock.o 00:02:09.018 CC examples/idxd/perf/perf.o 00:02:09.018 CC examples/vmd/led/led.o 00:02:09.018 CC examples/vmd/lsvmd/lsvmd.o 00:02:09.018 LINK spdk_nvme_identify 00:02:09.018 CC test/event/reactor_perf/reactor_perf.o 00:02:09.018 CC test/event/event_perf/event_perf.o 00:02:09.018 CC test/event/reactor/reactor.o 00:02:09.018 CC examples/thread/thread/thread_ex.o 00:02:09.018 CC test/event/app_repeat/app_repeat.o 00:02:09.018 CC test/event/scheduler/scheduler.o 00:02:09.018 CC test/blobfs/mkfs/mkfs.o 00:02:09.018 CC test/nvme/aer/aer.o 00:02:09.018 CC test/nvme/startup/startup.o 00:02:09.018 CC test/nvme/err_injection/err_injection.o 00:02:09.018 CC test/nvme/e2edp/nvme_dp.o 00:02:09.018 CC test/nvme/reset/reset.o 00:02:09.018 CC test/nvme/reserve/reserve.o 00:02:09.018 CC test/nvme/cuse/cuse.o 00:02:09.018 CC test/nvme/compliance/nvme_compliance.o 00:02:09.279 CC test/nvme/boot_partition/boot_partition.o 00:02:09.279 CC test/nvme/overhead/overhead.o 00:02:09.279 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:09.279 LINK lsvmd 00:02:09.279 CC test/nvme/simple_copy/simple_copy.o 00:02:09.279 CC test/nvme/sgl/sgl.o 00:02:09.279 CC test/nvme/connect_stress/connect_stress.o 00:02:09.279 CC test/nvme/fused_ordering/fused_ordering.o 00:02:09.279 CC test/nvme/fdp/fdp.o 00:02:09.279 CC test/accel/dif/dif.o 00:02:09.279 LINK vhost 00:02:09.279 LINK led 00:02:09.279 LINK event_perf 00:02:09.279 LINK reactor 00:02:09.279 LINK reactor_perf 00:02:09.279 LINK app_repeat 00:02:09.279 LINK hello_sock 00:02:09.279 CC test/lvol/esnap/esnap.o 00:02:09.279 LINK thread 00:02:09.279 LINK scheduler 00:02:09.279 LINK idxd_perf 00:02:09.279 LINK memory_ut 00:02:09.279 LINK startup 00:02:09.279 LINK mkfs 00:02:09.279 LINK reserve 00:02:09.279 LINK doorbell_aers 00:02:09.279 LINK connect_stress 00:02:09.279 LINK boot_partition 00:02:09.540 LINK fused_ordering 00:02:09.540 LINK err_injection 00:02:09.540 LINK simple_copy 00:02:09.540 LINK aer 00:02:09.540 LINK sgl 00:02:09.540 LINK reset 00:02:09.540 LINK nvme_dp 00:02:09.540 LINK overhead 00:02:09.540 LINK nvme_compliance 00:02:09.540 LINK fdp 00:02:09.540 LINK dif 00:02:09.801 LINK iscsi_fuzz 00:02:09.801 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:09.801 CC examples/nvme/reconnect/reconnect.o 00:02:09.801 CC examples/nvme/hotplug/hotplug.o 00:02:09.801 CC examples/nvme/hello_world/hello_world.o 00:02:09.801 CC examples/nvme/abort/abort.o 00:02:09.801 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:09.801 CC examples/nvme/arbitration/arbitration.o 00:02:09.801 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:09.801 CC examples/accel/perf/accel_perf.o 00:02:10.062 CC examples/blob/cli/blobcli.o 00:02:10.062 CC examples/blob/hello_world/hello_blob.o 00:02:10.062 LINK pmr_persistence 00:02:10.062 LINK cmb_copy 00:02:10.062 LINK hotplug 00:02:10.062 LINK hello_world 00:02:10.062 LINK reconnect 00:02:10.062 LINK arbitration 00:02:10.062 LINK abort 00:02:10.323 
LINK hello_blob
00:02:10.323 LINK nvme_manage
00:02:10.323 CC test/bdev/bdevio/bdevio.o
00:02:10.323 LINK cuse
00:02:10.323 LINK accel_perf
00:02:10.585 LINK blobcli
00:02:10.585 LINK bdevio
00:02:10.846 CC examples/bdev/hello_world/hello_bdev.o
00:02:10.846 CC examples/bdev/bdevperf/bdevperf.o
00:02:11.107 LINK hello_bdev
00:02:11.680 LINK bdevperf
00:02:12.252 CC examples/nvmf/nvmf/nvmf.o
00:02:12.513 LINK nvmf
00:02:13.457 LINK esnap
00:02:13.718
00:02:13.718 real 0m52.811s
00:02:13.718 user 6m51.511s
00:02:13.718 sys 5m33.081s
00:02:13.718 10:40:30 make -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:13.718 10:40:30 make -- common/autotest_common.sh@10 -- $ set +x
00:02:13.718 ************************************
00:02:13.718 END TEST make
00:02:13.718 ************************************
00:02:13.718 10:40:30 -- common/autotest_common.sh@1142 -- $ return 0
00:02:13.719 10:40:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:13.719 10:40:30 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:13.719 10:40:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:13.719 10:40:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.719 10:40:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:13.719 10:40:30 -- pm/common@44 -- $ pid=1753726
00:02:13.719 10:40:30 -- pm/common@50 -- $ kill -TERM 1753726
00:02:13.719 10:40:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.719 10:40:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:13.719 10:40:30 -- pm/common@44 -- $ pid=1753727
00:02:13.719 10:40:30 -- pm/common@50 -- $ kill -TERM 1753727
00:02:13.719 10:40:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.719 10:40:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:13.719 10:40:30 -- pm/common@44 -- $ pid=1753729
00:02:13.719 10:40:30 -- pm/common@50 -- $ kill -TERM 1753729
00:02:13.719 10:40:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.719 10:40:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:13.719 10:40:30 -- pm/common@44 -- $ pid=1753757
00:02:13.719 10:40:30 -- pm/common@50 -- $ sudo -E kill -TERM 1753757
00:02:13.981 10:40:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:13.981 10:40:30 -- nvmf/common.sh@7 -- # uname -s
00:02:13.981 10:40:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:13.981 10:40:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:13.981 10:40:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:13.981 10:40:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:13.981 10:40:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:13.981 10:40:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:13.981 10:40:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:13.981 10:40:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:13.981 10:40:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:13.981 10:40:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:13.981 10:40:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:02:13.981 10:40:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:02:13.981 10:40:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:13.981 10:40:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:13.981 10:40:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:13.981 10:40:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:13.981 10:40:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:13.981 10:40:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:13.981 10:40:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:13.981 10:40:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:13.981 10:40:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:13.981 10:40:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:13.981 10:40:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:13.981 10:40:30 -- paths/export.sh@5 -- # export PATH
00:02:13.981 10:40:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:13.981 10:40:30 -- nvmf/common.sh@47 -- # : 0
00:02:13.981 10:40:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:13.981 10:40:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:13.981 10:40:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:13.981 10:40:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:13.981 10:40:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:13.981 10:40:30 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:13.981 10:40:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:13.981 10:40:30 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:13.981 10:40:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:13.981 10:40:30 -- spdk/autotest.sh@32 -- # uname -s
00:02:13.981 10:40:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:13.981 10:40:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:13.981 10:40:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:13.981 10:40:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:13.981 10:40:30 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:13.981 10:40:30 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:13.981 10:40:30 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:13.981 10:40:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:13.981 10:40:30 -- spdk/autotest.sh@48 -- # udevadm_pid=1817630
00:02:13.981 10:40:30 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:13.981 10:40:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:13.981 10:40:30 -- pm/common@17 -- # local monitor
00:02:13.981 10:40:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.981 10:40:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.981 10:40:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.981 10:40:30 -- pm/common@21 -- # date +%s
00:02:13.981 10:40:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.981 10:40:30 -- pm/common@21 -- # date +%s
00:02:13.981 10:40:30 -- pm/common@25 -- # sleep 1
00:02:13.981 10:40:30 -- pm/common@21 -- # date +%s
00:02:13.981 10:40:30 -- pm/common@21 -- # date +%s
00:02:13.981 10:40:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720773630
00:02:13.981 10:40:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720773630
00:02:13.981 10:40:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720773630
00:02:13.981 10:40:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720773630
00:02:13.981 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720773630_collect-vmstat.pm.log
00:02:13.981 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720773630_collect-cpu-load.pm.log
00:02:13.981 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720773630_collect-cpu-temp.pm.log
00:02:13.981 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720773630_collect-bmc-pm.bmc.pm.log
00:02:14.925 10:40:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:14.925 10:40:31 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:14.925 10:40:31 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:14.925 10:40:31 -- common/autotest_common.sh@10 -- # set +x
00:02:14.925 10:40:31 -- spdk/autotest.sh@59 -- # create_test_list
00:02:14.925 10:40:31 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:14.925 10:40:31 -- common/autotest_common.sh@10 -- # set +x
00:02:15.186 10:40:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:15.186 10:40:31 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:15.186 10:40:31 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:15.186 10:40:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:15.186 10:40:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:15.186 10:40:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:15.186 10:40:31 -- common/autotest_common.sh@1455 -- # uname
00:02:15.186 10:40:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:15.186 10:40:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:15.186 10:40:31 -- common/autotest_common.sh@1475 -- # uname
00:02:15.186 10:40:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:15.186 10:40:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:15.186 10:40:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:02:15.186 10:40:31 -- spdk/autotest.sh@72 -- # hash lcov
00:02:15.186 10:40:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:15.186 10:40:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:02:15.186 --rc lcov_branch_coverage=1
00:02:15.186 --rc lcov_function_coverage=1
00:02:15.186 --rc genhtml_branch_coverage=1
00:02:15.186 --rc genhtml_function_coverage=1
00:02:15.186 --rc genhtml_legend=1
00:02:15.186 --rc geninfo_all_blocks=1
00:02:15.186 '
00:02:15.186 10:40:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:02:15.186 --rc lcov_branch_coverage=1
00:02:15.186 --rc lcov_function_coverage=1
00:02:15.186 --rc genhtml_branch_coverage=1
00:02:15.186 --rc genhtml_function_coverage=1
00:02:15.186 --rc genhtml_legend=1
00:02:15.186 --rc geninfo_all_blocks=1
00:02:15.186 '
00:02:15.186 10:40:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:02:15.186 --rc lcov_branch_coverage=1
00:02:15.186 --rc lcov_function_coverage=1
00:02:15.186 --rc genhtml_branch_coverage=1
00:02:15.186 --rc genhtml_function_coverage=1
00:02:15.186 --rc genhtml_legend=1
00:02:15.186 --rc geninfo_all_blocks=1
00:02:15.186 --no-external'
00:02:15.186 10:40:31 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:02:15.186 --rc lcov_branch_coverage=1
00:02:15.186 --rc lcov_function_coverage=1
00:02:15.186 --rc genhtml_branch_coverage=1
00:02:15.186 --rc genhtml_function_coverage=1
00:02:15.186 --rc genhtml_legend=1
00:02:15.186 --rc geninfo_all_blocks=1
00:02:15.186 --no-external'
00:02:15.186 10:40:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:15.186 lcov: LCOV version 1.14
00:02:15.186 10:40:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:20.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:02:20.501 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:02:20.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:02:20.501 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:02:20.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:02:20.501 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:02:20.501 [geninfo printed the same pair of records -- '<name>.gcno:no functions found' plus the matching 'WARNING: GCOV did not produce any data' line -- for each of the remaining header stubs under test/cpp_headers/ (bdev, base64, barrier, blob, conf, crc16/32/64, env, event, fd, ftl, ... through xor and vfio_user_pci); those duplicated warnings are elided here]
00:02:20.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:20.502 geninfo: WARNING: GCOV did
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:20.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:20.502 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:20.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:20.502 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:38.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:38.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:45.212 10:41:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:45.212 10:41:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:45.212 10:41:01 -- common/autotest_common.sh@10 -- # set +x 00:02:45.212 10:41:01 -- spdk/autotest.sh@91 -- # rm -f 00:02:45.212 10:41:01 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.859 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:47.859 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:47.859 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:47.859 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:47.859 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:47.859 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:48.120 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:48.120 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:48.120 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:48.120 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:48.120 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:48.120 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:48.120 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:48.120 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:48.120 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:48.120 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:48.120 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:48.380 10:41:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:48.380 10:41:05 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:48.380 10:41:05 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:48.380 10:41:05 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:48.380 10:41:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:48.380 10:41:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:48.380 10:41:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:48.380 10:41:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.380 10:41:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:48.380 10:41:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:48.380 10:41:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:48.380 10:41:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:48.380 10:41:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:48.380 10:41:05 -- scripts/common.sh@378 -- # 
local block=/dev/nvme0n1 pt 00:02:48.380 10:41:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:48.640 No valid GPT data, bailing 00:02:48.640 10:41:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:48.640 10:41:05 -- scripts/common.sh@391 -- # pt= 00:02:48.640 10:41:05 -- scripts/common.sh@392 -- # return 1 00:02:48.640 10:41:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:48.640 1+0 records in 00:02:48.640 1+0 records out 00:02:48.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488102 s, 215 MB/s 00:02:48.640 10:41:05 -- spdk/autotest.sh@118 -- # sync 00:02:48.640 10:41:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:48.640 10:41:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:48.640 10:41:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:56.775 10:41:13 -- spdk/autotest.sh@124 -- # uname -s 00:02:56.775 10:41:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:56.775 10:41:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:56.775 10:41:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.775 10:41:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.775 10:41:13 -- common/autotest_common.sh@10 -- # set +x 00:02:56.775 ************************************ 00:02:56.775 START TEST setup.sh 00:02:56.775 ************************************ 00:02:56.775 10:41:13 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:56.775 * Looking for test storage... 00:02:56.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:56.775 10:41:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:56.775 10:41:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:56.775 10:41:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:56.775 10:41:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.775 10:41:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.775 10:41:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:56.775 ************************************ 00:02:56.775 START TEST acl 00:02:56.775 ************************************ 00:02:56.775 10:41:13 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:57.035 * Looking for test storage... 
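A few records up, autotest cleared /dev/nvme0n1 for reuse: block_in_use asked scripts/spdk-gpt.py for a partition table, fell back to a blkid PTTYPE probe, and only after both came back empty ("No valid GPT data, bailing", pt=) did it dd a MiB of zeros over the device. Below is a minimal sketch of that safety gate, assuming only blkid(8) and coreutils; disk_has_partition_table is an illustrative name, not SPDK's.

#!/usr/bin/env bash
# Sketch of the wipe gate traced above -- not SPDK's literal scripts/common.sh.
set -euo pipefail

DISK=${1:-/dev/nvme0n1}

disk_has_partition_table() {
    # blkid prints the table type (gpt, dos, ...) or nothing at all; an empty
    # result is what lets the block_in_use check in the log return 1 (free).
    [[ -n $(blkid -s PTTYPE -o value "$DISK" 2>/dev/null || true) ]]
}

if disk_has_partition_table; then
    echo "$DISK looks in use; refusing to wipe" >&2
    exit 1
fi

# Same destructive step as the log: zero the first MiB, which holds the MBR
# and primary GPT header that the probes key on, then flush to the device.
dd if=/dev/zero of="$DISK" bs=1M count=1
sync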
00:02:57.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:57.035 10:41:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:57.035 10:41:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:57.035 10:41:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:57.035 10:41:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:57.035 10:41:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:57.035 10:41:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:57.035 10:41:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:57.035 10:41:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:57.035 10:41:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:57.035 10:41:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:57.035 10:41:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:57.036 10:41:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:57.036 10:41:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:57.036 10:41:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:57.036 10:41:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.036 10:41:13 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.243 10:41:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:01.243 10:41:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:01.243 10:41:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.243 10:41:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:01.243 10:41:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.243 10:41:17 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:04.548 Hugepages 00:03:04.548 node hugesize free / total 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 00:03:04.548 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.548 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.549 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:04.549 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.549 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.549 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.549 10:41:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:04.549 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.549 10:41:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.549 10:41:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.549 10:41:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:04.549 10:41:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:04.549 10:41:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.549 10:41:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.549 10:41:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:04.549 ************************************ 00:03:04.549 START TEST denied 00:03:04.549 ************************************ 00:03:04.549 10:41:21 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:04.549 10:41:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:04.549 10:41:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:04.549 10:41:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:04.549 10:41:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.549 10:41:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.759 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:08.759 10:41:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:08.759 10:41:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:08.759 10:41:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:08.759 10:41:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:08.759 10:41:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:08.759 10:41:25 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:08.759 10:41:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:08.759 10:41:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:08.759 10:41:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.759 10:41:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.051 00:03:14.051 real 0m8.986s 00:03:14.051 user 0m3.058s 00:03:14.051 sys 0m5.128s 00:03:14.051 10:41:30 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:14.051 10:41:30 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:14.051 ************************************ 00:03:14.051 END TEST denied 00:03:14.051 ************************************ 00:03:14.051 10:41:30 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:14.051 10:41:30 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:14.051 10:41:30 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.051 10:41:30 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.051 10:41:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:14.051 ************************************ 00:03:14.051 START TEST allowed 00:03:14.051 ************************************ 00:03:14.051 10:41:30 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:14.051 10:41:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:14.051 10:41:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:14.051 10:41:30 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:14.051 10:41:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.051 10:41:30 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:19.418 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:19.418 10:41:36 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:19.418 10:41:36 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:19.418 10:41:36 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:19.418 10:41:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:19.418 10:41:36 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.627 00:03:23.627 real 0m9.727s 00:03:23.627 user 0m2.892s 00:03:23.627 sys 0m5.134s 00:03:23.627 10:41:40 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.627 10:41:40 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:23.627 ************************************ 00:03:23.627 END TEST allowed 00:03:23.627 ************************************ 00:03:23.627 10:41:40 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:23.627 00:03:23.627 real 0m26.624s 00:03:23.627 user 0m8.986s 00:03:23.627 sys 0m15.346s 00:03:23.627 10:41:40 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.627 10:41:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:23.627 ************************************ 00:03:23.627 END TEST acl 00:03:23.627 ************************************ 00:03:23.627 10:41:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:23.627 10:41:40 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:23.627 10:41:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.627 10:41:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.627 10:41:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:23.627 ************************************ 00:03:23.627 START TEST hugepages 00:03:23.627 ************************************ 00:03:23.627 10:41:40 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:23.627 * Looking for test storage... 00:03:23.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103308464 kB' 'MemAvailable: 106559412 kB' 'Buffers: 2704 kB' 'Cached: 14366448 kB' 'SwapCached: 0 kB' 'Active: 11394396 kB' 'Inactive: 3514444 kB' 'Active(anon): 10983580 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543360 kB' 'Mapped: 199048 kB' 'Shmem: 10443892 kB' 'KReclaimable: 300040 kB' 'Slab: 1132064 kB' 'SReclaimable: 300040 kB' 'SUnreclaim: 832024 kB' 'KernelStack: 27392 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 12572652 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.627 10:41:40 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.627 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.627 [the trace then repeated the identical IFS / read / '[[ <field> == Hugepagesize ]]' / 'continue' sequence for each remaining /proc/meminfo field, Active(file) through HardwareCorrupted and onward; the duplicated iterations are elided here]
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.628 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.629 10:41:40 
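The collapsed loop above is setup/common.sh's get_meminfo: it scans /proc/meminfo one record at a time, skipping keys until the requested one matches, then echoes the value column (2048 for Hugepagesize on this host). A minimal standalone sketch of the same technique, assuming plain bash; the helper name mirrors the trace, but this is a simplified sketch, not the full SPDK implementation (the real one also handles per-NUMA-node files):

#!/usr/bin/env bash
# Sketch of the traced lookup: scan /proc/meminfo for one key, print its value.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "continue" records in the trace
        echo "$val"                        # value only; the kB unit lands in $_
        return 0
    done </proc/meminfo
    return 1                               # requested key not present
}

get_meminfo Hugepagesize   # prints 2048 given the snapshot above

Splitting on IFS=': ' lets a single read strip the colon and separate the unit, so the dozens of lookups this test makes never fork an awk or grep child.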
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:23.629 10:41:40 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:23.629 10:41:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.629 10:41:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.629 10:41:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.629 ************************************ 00:03:23.629 START TEST default_setup 00:03:23.629 ************************************ 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.629 10:41:40 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.843 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:03:27.843 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:27.843 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105461204 kB' 'MemAvailable: 108712120 kB' 'Buffers: 2704 kB' 'Cached: 14366584 kB' 'SwapCached: 0 kB' 'Active: 11412664 kB' 'Inactive: 3514444 kB' 'Active(anon): 11001848 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560776 kB' 'Mapped: 199280 kB' 'Shmem: 10444028 kB' 'KReclaimable: 299976 kB' 'Slab: 1130736 kB' 'SReclaimable: 299976 
kB' 'SUnreclaim: 830760 kB' 'KernelStack: 27344 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12591900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235236 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB' 00:03:27.843 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: the same read loop scanned every key from MemTotal through HardwareCorrupted against AnonHugePages, taking "continue" on each non-match] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
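With anon recorded, the surplus and reserved counters fetched here and just below complete verify_nr_hugepages' bookkeeping: a statically provisioned pool should report zero surplus pages and a total equal to what the test asked for. A rough sketch of that consistency check under those assumptions; the awk extraction and variable names are ours for illustration, not the literal SPDK code:

#!/usr/bin/env bash
# Sketch: compare the kernel's hugepage counters against the test's target pool.
expected=1024   # the test's target: 2097152 kB backed by 2048 kB pages

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

(( surp == 0 )) || echo "unexpected surplus pages: $surp"   # overcommit leakage
(( total == expected )) || { echo "have $total, want $expected"; exit 1; }
echo "pool OK: total=$total free=$free rsvd=$rsvd"

Against the snapshot above this prints pool OK: total=1024 free=1024 rsvd=0, which is the state the anon/surp/resv locals are converging on.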
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105464444 kB' 'MemAvailable: 108715360 kB' 'Buffers: 2704 kB' 'Cached: 14366588 kB' 'SwapCached: 0 kB' 'Active: 11411144 kB' 'Inactive: 3514444 kB' 'Active(anon): 11000328 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559792 kB' 'Mapped: 199248 kB' 'Shmem: 10444032 kB' 'KReclaimable: 299976 kB' 'Slab: 1130704 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830728 kB' 'KernelStack: 27408 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12593160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235252 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue
00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.845 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[the @31 IFS/read and @32 compare/continue trace repeats for each remaining field: NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd]
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
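The wall of read/compare lines above is bash xtrace output from the get_meminfo helper in SPDK's test/setup/common.sh: the helper snapshots a meminfo file into an array once (the @16 printf seen below), then scans it line by line for one requested field, so every skipped field costs exactly one traced [[ comparison plus a continue. A minimal sketch of that scan, with simplified structure and names (not the verbatim SPDK source):

shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=$2 var val _ line
	local mem_f=/proc/meminfo mem
	# A per-node query (non-empty $node) reads that NUMA node's own meminfo.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <N> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		# Under set -x each skipped field prints one compare plus a
		# continue -- the repeated trace condensed above.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

# Usage as seen in this trace:
#   get_meminfo HugePages_Surp     -> 0   (system-wide, /proc/meminfo)
#   get_meminfo HugePages_Surp 0   -> 0   (node 0 only)

The same helper is invoked next for HugePages_Rsvd and then HugePages_Total, which is why the identical scan repeats below.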
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:27.846 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105466616 kB' 'MemAvailable: 108717532 kB' 'Buffers: 2704 kB' 'Cached: 14366604 kB' 'SwapCached: 0 kB' 'Active: 11411332 kB' 'Inactive: 3514444 kB' 'Active(anon): 11000516 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559884 kB' 'Mapped: 199176 kB' 'Shmem: 10444048 kB' 'KReclaimable: 299976 kB' 'Slab: 1130704 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830728 kB' 'KernelStack: 27264 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12591572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235220 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[the @31 read and @32 compare/continue trace repeats for every field from MemTotal through HugePages_Free, this time against HugePages_Rsvd]
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:27.847 nr_hugepages=1024
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:27.847 resv_hugepages=0
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:27.847 surplus_hugepages=0
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:27.847 anon_hugepages=0
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
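Having collected surp=0 and resv=0, hugepages.sh now cross-checks the pool at @107-@110: the HugePages_Total reported by the kernel must equal the requested nr_hugepages plus the surplus and reserved counts (1024 == 1024 + 0 + 0 in this run). A hedged sketch of that bookkeeping, reusing the get_meminfo sketch above (not the verbatim SPDK source):

nr_hugepages=1024                      # requested pool size
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run
# The kernel must report exactly the requested pages, no more, no fewer.
(( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2

The @110 call that fetches HugePages_Total for this check is the third full scan traced below.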
00:03:27.847 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[entry trace @17-@31 as above, now with get=HugePages_Total, node unset and mem_f=/proc/meminfo]
00:03:27.848 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105465872 kB' 'MemAvailable: 108716788 kB' 'Buffers: 2704 kB' 'Cached: 14366632 kB' 'SwapCached: 0 kB' 'Active: 11411764 kB' 'Inactive: 3514444 kB' 'Active(anon): 11000948 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560304 kB' 'Mapped: 199176 kB' 'Shmem: 10444076 kB' 'KReclaimable: 299976 kB' 'Slab: 1130704 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830728 kB' 'KernelStack: 27424 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12593204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[the @31 read and @32 compare/continue trace repeats against HugePages_Total for every field from MemTotal through Unaccepted]
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53245688 kB' 'MemUsed: 12413320 kB' 'SwapCached: 0 kB' 'Active: 4465396 kB' 'Inactive: 3293756 kB' 'Active(anon): 4322744 kB' 'Inactive(anon): 0 kB' 'Active(file): 142652 kB' 'Inactive(file): 3293756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7428868 kB' 'Mapped: 70988 kB' 'AnonPages: 333544 kB' 'Shmem: 3992460 kB' 'KernelStack: 15672 kB' 'PageTables: 5576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185680 kB' 'Slab: 681172 kB' 'SReclaimable: 185680 kB' 'SUnreclaim: 495492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[the @31 read and @32 compare/continue trace repeats against HugePages_Surp for node0's fields from MemTotal through KReclaimable; the capture ends mid-scan]
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.849 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:27.850 node0=1024 expecting 1024 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:27.850 00:03:27.850 real 0m4.123s 00:03:27.850 user 0m1.619s 00:03:27.850 sys 0m2.552s 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.850 10:41:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:27.850 ************************************ 00:03:27.850 END TEST default_setup 00:03:27.850 ************************************ 00:03:27.850 10:41:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:27.850 10:41:44 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:27.850 10:41:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.850 10:41:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.850 10:41:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.850 ************************************ 00:03:27.850 START TEST per_node_1G_alloc 00:03:27.850 ************************************ 00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- 
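Every meminfo scan elided in this log follows the same shape: slurp the relevant meminfo file, strip any per-node prefix, then walk it key by key with IFS=': '. A self-contained reconstruction of that pattern from the trace (a sketch, not the verbatim setup/common.sh source; extglob is assumed for the prefix strip):

#!/usr/bin/env bash
# Sketch of the lookup traced above, reconstructed from the xtrace.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _ line
	local mem_f=/proc/meminfo
	local -a mem

	# Per-node counters live under sysfs rather than /proc.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	# Node files prefix every key with "Node <id> "; strip it so the
	# same "Key: value" parse works for both sources.
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		[[ $var == "$get" ]] && echo "${val:-0}" && return 0
	done
	return 1
}

get_meminfo HugePages_Surp      # system-wide: prints 0 in this run
get_meminfo HugePages_Total 0   # NUMA node 0: prints 1024 in this run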
00:03:27.850 10:41:44 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:27.850 10:41:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:27.850 10:41:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:27.850 10:41:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:27.850 ************************************
00:03:27.850 START TEST per_node_1G_alloc
00:03:27.850 ************************************
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
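The values above fall out of simple arithmetic: a 1048576 kB (1 GiB) request against the 2048 kB default hugepage size yields 512 pages, assigned to each of nodes 0 and 1, so 1024 pages in total. A sketch with names mirroring the trace (values come straight from this run):

declare -a nodes_test
size=1048576              # requested kB per node (1 GiB)
default_hugepages=2048    # kB per page, per 'Hugepagesize: 2048 kB' in the snapshots below
nr_hugepages=$(( size / default_hugepages ))   # 512
for node in 0 1; do       # user_nodes=('0' '1')
	nodes_test[node]=$nr_hugepages
done
echo "${nodes_test[0]} + ${nodes_test[1]} = $(( nodes_test[0] + nodes_test[1] )) pages"   # 512 + 512 = 1024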
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.850 10:41:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:31.223 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:31.223 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:31.223 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
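With NRHUGE=512 and HUGENODE=0,1 exported, setup.sh reserves pages per NUMA node. The kernel's standard per-node sysfs knob makes the mechanism easy to picture; the following is a sketch of the idea (run as root), not the verbatim scripts/setup.sh logic:

# Per-node reservation sketch: write the page count into each node's
# kernel-standard nr_hugepages knob (variable names mirror the log).
NRHUGE=512
HUGENODE=0,1
knob=hugepages/hugepages-2048kB/nr_hugepages   # 2 MiB pages, per this run
IFS=',' read -ra nodes <<<"$HUGENODE"
for node in "${nodes[@]}"; do
	echo "$NRHUGE" > "/sys/devices/system/node/node$node/$knob"
done
grep '^HugePages_Total' /proc/meminfo   # expect 1024 with two nodes at 512 each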
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.804 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105499572 kB' 'MemAvailable: 108750488 kB' 'Buffers: 2704 kB' 'Cached: 14366740 kB' 'SwapCached: 0 kB' 'Active: 11410628 kB' 'Inactive: 3514444 kB' 'Active(anon): 10999812 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558300 kB' 'Mapped: 198300 kB' 'Shmem: 10444184 kB' 'KReclaimable: 299976 kB' 'Slab: 1130440 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830464 kB' 'KernelStack: 27328 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12580084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235700 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[... xtrace elided: setup/common.sh@31-32 walks each /proc/meminfo key (MemTotal ... HardwareCorrupted) looking for AnonHugePages, hitting 'continue' on every non-matching key ...]
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
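The anon lookup above only matters because transparent hugepages could inflate the accounting: verify_nr_hugepages consults AnonHugePages unless THP is pinned to "[never]" in sysfs. A sketch of that guard, with this run's values noted in comments:

# THP guard sketch: the bracketed token in the sysfs file marks the
# active mode; "always [madvise] never" does not match *"[never]"*.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
anon=0
if [[ $thp != *"[never]"* ]]; then
	anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # 0 kB in this run
fi
echo "anon=$anon"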
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.806 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105499452 kB' 'MemAvailable: 108750368 kB' 'Buffers: 2704 kB' 'Cached: 14366744 kB' 'SwapCached: 0 kB' 'Active: 11410124 kB' 'Inactive: 3514444 kB' 'Active(anon): 10999308 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558396 kB' 'Mapped: 198204 kB' 'Shmem: 10444188 kB' 'KReclaimable: 299976 kB' 'Slab: 1130384 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830408 kB' 'KernelStack: 27440 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12580204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235748 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[... xtrace elided: setup/common.sh@31-32 walks each /proc/meminfo key (MemTotal ... HugePages_Rsvd) looking for HugePages_Surp, hitting 'continue' on every non-matching key ...]
00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
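At this point the verifier holds anon=0 and surp=0 and fetches HugePages_Rsvd next (below); together with the Total/Free counters in the snapshots, these describe a fully free 1024-page pool. An illustrative consistency check (the helper name read_hp is hypothetical, and the arithmetic is illustrative rather than SPDK's exact assertion):

# Pull the four hugepage counters and check the pool is fully free,
# net of any reserved or surplus pages.
read_hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
total=$(read_hp HugePages_Total)   # 1024 in the snapshots above
free=$(read_hp HugePages_Free)     # 1024
rsvd=$(read_hp HugePages_Rsvd)     # 0
surp=$(read_hp HugePages_Surp)     # 0
(( free - rsvd == total - surp )) && echo "pool consistent: $total pages, all free"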
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105503116 kB' 'MemAvailable: 108754032 kB' 'Buffers: 2704 kB' 'Cached: 14366764 kB' 'SwapCached: 0 kB' 'Active: 11410208 kB' 'Inactive: 3514444 kB' 'Active(anon): 10999392 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558400 kB' 'Mapped: 198204 kB' 'Shmem: 10444208 kB' 'KReclaimable: 299976 kB' 'Slab: 1130384 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830408 kB' 'KernelStack: 27424 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12580132 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235748 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB' 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.809 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.809 
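Annotation: the long runs of "[[ <key> == \H\u\g\e... ]]" / "continue" in this trace are bash xtrace output of get_meminfo scanning one meminfo line per loop iteration until the requested key matches. Below is a minimal sketch of that pattern, reconstructed from the xtrace; the function body is an assumption inferred from the trace, not the verbatim setup/common.sh source.

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in the trace (assumed, not verbatim).
    shopt -s extglob

    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead
        # (the common.sh@23/@24 lines in the trace).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it (extglob pattern).
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Key: value [kB]" lines; non-matching keys produce the
        # long [[ ... ]] / continue runs seen above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this box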
[... xtrace scan elided: each /proc/meminfo key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with "continue" ...]
00:03:31.811 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:31.811 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.811 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
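Annotation: the "echo 0" / "return 0" pair is the match branch handing the value back; the caller captures it with command substitution, which is why the trace immediately shows surp=0 and resv=0. A hedged reconstruction of those call sites (traced as hugepages.sh@99/@100; exact source assumed, not verbatim):

    surp=$(get_meminfo HugePages_Surp)   # traced at setup/hugepages.sh@99  -> surp=0
    resv=$(get_meminfo HugePages_Rsvd)   # traced at setup/hugepages.sh@100 -> resv=0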
00:03:31.811 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:31.811 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:31.811 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:31.811 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.812 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105502996 kB' 'MemAvailable: 108753912 kB' 'Buffers: 2704 kB' 'Cached: 14366784 kB' 'SwapCached: 0 kB' 'Active: 11409744 kB' 'Inactive: 3514444 kB' 'Active(anon): 10998928 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557908 kB' 'Mapped: 198204 kB' 'Shmem: 10444228 kB' 'KReclaimable: 299976 kB' 'Slab: 1130384 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830408 kB' 'KernelStack: 27312 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12581760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235716 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
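Annotation: the hugepages.sh@107/@109/@110 arithmetic lines above assert that the kernel-reported pool is consistent: the configured page count equals the request, and HugePages_Total equals nr_hugepages plus surplus plus reserved pages. A sketch of that accounting check, reusing get_meminfo from the first sketch (the assertion layout is an assumption inferred from the trace):

    nr_hugepages=1024                      # requested pool size for this test
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    # Pool consistency: kernel total must match request + surplus + reserved,
    # otherwise the allocation test fails here.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))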
[... xtrace scan elided: each /proc/meminfo key from Active(anon) through Unaccepted is compared against HugePages_Total and skipped with "continue" ...]
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.814 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54299048 kB' 'MemUsed: 11359960 kB' 'SwapCached: 0 kB' 'Active: 4465184 kB' 'Inactive: 3293756 kB' 'Active(anon): 4322532 kB' 'Inactive(anon): 0 kB' 'Active(file): 142652 kB' 'Inactive(file): 3293756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7428984 kB' 'Mapped: 70512 kB' 'AnonPages: 333112 kB' 'Shmem: 3992576 kB' 'KernelStack: 15640 kB' 'PageTables: 5412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185680 kB' 'Slab: 681064 kB' 'SReclaimable: 185680 kB' 'SUnreclaim: 495384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
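Annotation: get_nodes (hugepages.sh@27-@33 above) globs the NUMA node directories under /sys and records the expected per-node share, 512 of the 1024 pages on each of this box's two nodes; the @115-@117 loop then re-reads each node's own meminfo (note mem_f switching to /sys/devices/system/node/node0/meminfo). A sketch of that flow, reusing get_meminfo from the first sketch; the trace's parallel nodes_test bookkeeping (which also adds resv per node) is simplified to a single array here:

    shopt -s extglob
    declare -A nodes_sys

    get_nodes() {
        local node
        # One entry per NUMA node directory; 512 pages expected on each
        # (value taken from the hugepages.sh@30 trace lines).
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=512
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # the test requires at least one node
    }

    get_nodes
    for node in "${!nodes_sys[@]}"; do
        # get_meminfo with a node argument reads node$node/meminfo.
        echo "node$node surplus: $(get_meminfo HugePages_Surp "$node")"
    done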
[... xtrace scan elided: each node0 meminfo key (MemTotal through Unaccepted) is compared against HugePages_Surp and skipped with "continue"; the trace continues ...]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51202808 kB' 'MemUsed: 9477064 kB' 'SwapCached: 0 kB' 'Active: 6944936 kB' 'Inactive: 220688 kB' 'Active(anon): 6676772 kB' 'Inactive(anon): 0 kB' 'Active(file): 268164 kB' 'Inactive(file): 220688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6940532 kB' 'Mapped: 127692 kB' 'AnonPages: 225156 kB' 'Shmem: 6451680 kB' 
'KernelStack: 11672 kB' 'PageTables: 3172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114296 kB' 'Slab: 449312 kB' 'SReclaimable: 114296 kB' 'SUnreclaim: 335016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.816 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.817 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:31.818 node0=512 expecting 512 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:31.818 node1=512 expecting 512 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:31.818 00:03:31.818 real 0m3.941s 00:03:31.818 user 0m1.581s 00:03:31.818 sys 0m2.420s 00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.818 10:41:48 
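Context for the spans condensed above: get_meminfo in setup/common.sh walks a meminfo file one 'key: value' record at a time, which is why a single lookup produces dozens of near-identical xtrace records. A minimal sketch of that helper, reconstructed from the trace alone (the here-string read and the combined node guard are assumptions, not the verbatim SPDK source):

shopt -s extglob   # needed for the +([0-9]) pattern below

# Print the value of one meminfo field. With a node number, read the per-node
# sysfs copy when it exists; otherwise fall back to the global /proc/meminfo.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node sysfs records carry a "Node N " prefix
    local line
    for line in "${mem[@]}"; do
        # '_' soaks up the trailing unit ("kB"); non-matching keys fall through,
        # which is the long run of 'continue' records in the trace
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
}

Called as get_meminfo HugePages_Surp 1, as in the trace above, it reads /sys/devices/system/node/node1/meminfo and prints 0.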
00:03:31.818 10:41:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:31.818 ************************************
00:03:31.818 END TEST per_node_1G_alloc
00:03:31.818 ************************************
00:03:31.818 10:41:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:31.818 10:41:48 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:31.818 10:41:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:31.818 10:41:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:31.818 10:41:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:32.081 ************************************
00:03:32.081 START TEST even_2G_alloc
00:03:32.081 ************************************
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
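What the trace above just did: get_test_nr_hugepages turned the 2097152 kB request into nr_hugepages=1024 default-size pages, and get_test_nr_hugepages_per_node split them evenly over both NUMA nodes (nodes_test[1]=512, then nodes_test[0]=512). A sketch with the arithmetic inferred from those values (the kB unit and the 2048 kB Hugepagesize are assumptions, not taken from the hugepages.sh source):

# 2097152 kB / 2048 kB per page = 1024 pages; 1024 / 2 nodes = 512 per node.
# Variable names follow the xtrace; the division itself is inferred.
get_test_nr_hugepages() {
    local size=$1                      # requested total, in kB (2097152 == 2 GiB)
    local default_hugepages=2048       # assumed Hugepagesize in kB on this rig
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))
    get_test_nr_hugepages_per_node
}

get_test_nr_hugepages_per_node() {
    local _nr_hugepages=$nr_hugepages
    local _no_nodes=2                  # both sockets of the CYP9 test node
    local per_node=$(( _nr_hugepages / _no_nodes ))
    nodes_test=()
    while (( _no_nodes > 0 )); do      # fills nodes_test[1]=512, then nodes_test[0]=512
        nodes_test[_no_nodes - 1]=$per_node
        (( _no_nodes-- ))
    done
}

Exporting NRHUGE=1024 with HUGE_EVEN_ALLOC=yes is what makes the setup.sh run below request 512 pages on each node, the figure verify_nr_hugepages then checks.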
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.081 10:41:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:35.387 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:35.387 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:35.387 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.653 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105478988 kB' 'MemAvailable: 108729904 kB' 'Buffers: 2704 kB' 'Cached: 14366924 kB' 'SwapCached: 0 kB' 'Active: 11410628 kB' 'Inactive: 3514444 kB' 'Active(anon): 10999812 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558188 kB' 'Mapped: 198276 kB' 'Shmem: 10444368 kB' 'KReclaimable: 299976 kB' 'Slab: 1130080 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830104 kB' 'KernelStack: 27296 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12579608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[~40 repeated xtrace records condensed: setup/common.sh@32 tests each /proc/meminfo field (MemTotal through HardwareCorrupted) against AnonHugePages and skips every non-matching key via 'continue']
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.654 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105480072 kB' 'MemAvailable: 108730988 kB' 'Buffers: 2704 kB' 'Cached: 14366928 kB' 'SwapCached: 0 kB' 'Active: 11410280 kB' 'Inactive: 3514444 kB' 'Active(anon): 10999464 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557840 kB' 'Mapped: 198260 kB' 'Shmem: 10444372 kB' 'KReclaimable: 299976 kB' 'Slab: 1130052 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830076 kB' 'KernelStack: 27248 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12579752 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[repeated xtrace records condensed: setup/common.sh@32 begins the same field-by-field scan of /proc/meminfo for HugePages_Surp; this capture ends mid-scan, with Dirty through SUnreclaim the last keys checked]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.655 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
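The trace above is SPDK's setup/common.sh get_meminfo helper doing a linear scan of a meminfo file: it slurps the file with mapfile, strips the "Node N " prefix that the per-node sysfs copies carry, then splits each line on ': ' and compares the key against the requested field, continuing until it matches. What follows is a minimal standalone sketch of that pattern for readers who want to reproduce it; the control flow mirrors the trace, but it is a reconstruction for illustration, not the SPDK source itself.

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern traced above (a reconstruction,
# not SPDK's setup/common.sh itself).
shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

get_meminfo() {
    local get=$1 node=${2:-}
    local var val rest line
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node meminfo lives in sysfs and prefixes each line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, if any

    for line in "${mem[@]}"; do
        # Split "HugePages_Surp:      0" into key and value.
        IFS=': ' read -r var val rest <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp     # whole system, e.g. prints 0
get_meminfo HugePages_Free 0   # NUMA node 0 only

The same helper serves both the system-wide file and the per-node sysfs files, which is why the trace later re-runs it with node=0.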
00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.656 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105480020 kB' 'MemAvailable: 108730936 kB' 'Buffers: 2704 kB' 'Cached: 14366944 kB' 'SwapCached: 0 kB' 'Active: 11409472 kB' 'Inactive: 3514444 kB' 'Active(anon): 10998656 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557480 kB' 'Mapped: 198180 kB' 'Shmem: 10444388 kB' 'KReclaimable: 299976 kB' 'Slab: 1130004 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830028 kB' 'KernelStack: 27232 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12579776 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[... repetitive scan elided: setup/common.sh@31-32 reads every field of the snapshot above in turn and hits continue for each key that is not HugePages_Rsvd ...]
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:35.658 nr_hugepages=1024
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:35.658 resv_hugepages=0
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:35.658 surplus_hugepages=0
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:35.658 anon_hugepages=0
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
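At this point hugepages.sh has collected anon=0, surp=0 and resv=0, prints the pool counters, and asserts two invariants before going on: the request (1024 pages of 2048 kB, i.e. the 2G of the test name) must equal HugePages_Total plus surplus plus reserved, and must also equal HugePages_Total on its own. A hedged sketch of that bookkeeping follows; it reuses the get_meminfo sketch above and mirrors the variable names in the trace, but "requested" is an illustrative name, not from the log.

#!/usr/bin/env bash
# Sketch of the bookkeeping hugepages.sh performs in the trace above
# (assumes the get_meminfo sketch shown earlier is already defined).
requested=1024   # 1024 x 2048 kB pages = 2 GiB, hence "even_2G_alloc"

anon=$(get_meminfo AnonHugePages)     # transparent hugepages, tracked apart
surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond the static pool
resv=$(get_meminfo HugePages_Rsvd)    # reserved but not yet faulted in
nr_hugepages=$(get_meminfo HugePages_Total)

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# hugepages.sh@107 in the trace asserts the pool adds up to the request;
# @109 additionally requires the static pool alone to match it.
(( requested == nr_hugepages + surp + resv )) || exit 1
(( requested == nr_hugepages )) || exit 1

The per-node pass that follows applies the same helper against /sys/devices/system/node/node*/meminfo, expecting the 1024 pages split evenly, 512 per socket, across the two NUMA nodes.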
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105481248 kB' 'MemAvailable: 108732164 kB' 'Buffers: 2704 kB' 'Cached: 14366984 kB' 'SwapCached: 0 kB' 'Active: 11410112 kB' 'Inactive: 3514444 kB' 'Active(anon): 10999296 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558120 kB' 'Mapped: 198180 kB' 'Shmem: 10444428 kB' 'KReclaimable: 299976 kB' 'Slab: 1130004 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 830028 kB' 'KernelStack: 27280 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12580168 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.658 10:41:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.658 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:35.659 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace: the IFS=': ' / read -r var val _ / continue cycle repeats for every remaining meminfo field (NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, Vmalloc*, Percpu, HardwareCorrupted, the Anon/Shmem/File hugepage counters, Cma*, Unaccepted) until the requested key matches]
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
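[editor's note: the loop traced above is setup/common.sh's get_meminfo helper walking a meminfo file field by field; the 1024 it returns is immediately checked against nr_hugepages + surp + resv at hugepages.sh line 110 before the per-node pass starts. A minimal sketch of that lookup pattern, assuming bash with extglob; names and structure are illustrative, not SPDK's exact implementation:

  #!/usr/bin/env bash
  shopt -s extglob
  # Illustrative meminfo lookup in the style traced above.
  get_meminfo_sketch() {
      local get=$1 node=$2
      local var val _ line
      local mem_f=/proc/meminfo
      # Per-node queries read that node's own meminfo, whose lines carry a
      # "Node <n>" prefix that has to be stripped before matching.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          # Split e.g. "HugePages_Total:    1024" into key and value.
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }
  get_meminfo_sketch HugePages_Total    # system-wide total, 1024 in this run
  get_meminfo_sketch HugePages_Surp 0   # node0 surplus, 0 in this run
]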
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54300248 kB' 'MemUsed: 11358760 kB' 'SwapCached: 0 kB' 'Active: 4465180 kB' 'Inactive: 3293756 kB' 'Active(anon): 4322528 kB' 'Inactive(anon): 0 kB' 'Active(file): 142652 kB' 'Inactive(file): 3293756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7429140 kB' 'Mapped: 70512 kB' 'AnonPages: 332924 kB' 'Shmem: 3992732 kB' 'KernelStack: 15688 kB' 'PageTables: 5552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185680 kB' 'Slab: 680712 kB' 'SReclaimable: 185680 kB' 'SUnreclaim: 495032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace: the HugePages_Surp scan repeats the IFS=': ' / read / continue cycle over every node0 meminfo field above (MemTotal through HugePages_Free) until the key matches]
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
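[editor's note: the same surplus lookup now repeats for node1. Condensed, the per-node pass traced at hugepages.sh lines 115-117 amounts to the loop below, reusing the illustrative get_meminfo_sketch helper from the earlier sketch; nodes_test and resv mirror the trace's variable names only:

  # Illustrative condensation of the per-node accounting traced above.
  declare -a nodes_test=(512 512)   # expected 2 MB pages per NUMA node
  resv=0                            # reserved pages (0 in this run)
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      surp=$(get_meminfo_sketch HugePages_Surp "$node")
      (( nodes_test[node] += surp ))   # both nodes report 0 surplus here
  done
]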
00:03:35.925 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.924 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51180584 kB' 'MemUsed: 9499288 kB' 'SwapCached: 0 kB' 'Active: 6945152 kB' 'Inactive: 220688 kB' 'Active(anon): 6676988 kB' 'Inactive(anon): 0 kB' 'Active(file): 268164 kB' 'Inactive(file): 220688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6940568 kB' 'Mapped: 127668 kB' 'AnonPages: 225400 kB' 'Shmem: 6451716 kB' 'KernelStack: 11624 kB' 'PageTables: 3208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114296 kB' 'Slab: 449292 kB' 'SReclaimable: 114296 kB' 'SUnreclaim: 334996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace: the HugePages_Surp scan repeats the IFS=': ' / read / continue cycle over every node1 meminfo field above until the key matches]
00:03:35.926 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.926 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:35.926 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:35.927 node0=512 expecting 512
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:35.927 node1=512 expecting 512
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:35.927
00:03:35.927 real	0m3.893s
00:03:35.927 user	0m1.585s
00:03:35.927 sys	0m2.375s
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:35.927 10:41:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:35.927 ************************************
00:03:35.927 END TEST even_2G_alloc
00:03:35.927 ************************************
00:03:35.927 10:41:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:35.927 10:41:52 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:35.927 10:41:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:35.927 10:41:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:35.927 10:41:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:35.927 ************************************
00:03:35.927 START TEST odd_alloc
00:03:35.927 ************************************
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:35.927 10:41:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
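[editor's note: 2098176 kB at the default 2048 kB hugepage size is 1024.5 pages, which get_test_nr_hugepages settles on as the odd count nr_hugepages=1025; the per-node pass above then fills nodes_test from the last node backwards, leaving node0=513 and node1=512 before HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes are handed to setup.sh. A sketch of an even-with-remainder split that reproduces those numbers (illustrative, not hugepages.sh's exact code path):

  # Spread an odd hugepage count across NUMA nodes as evenly as possible.
  total=1025 nodes=2
  base=$(( total / nodes ))   # 512 pages everywhere
  rem=$(( total % nodes ))    # 1 leftover page
  for (( i = 0; i < nodes; i++ )); do
      # Give the leftover pages to the lowest-numbered nodes first.
      echo "node$i=$(( base + (i < rem ? 1 : 0) ))"
  done
  # -> node0=513, node1=512, matching the nodes_test values traced above.
]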
00:03:39.252 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:39.252 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:39.252 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105471432 kB' 'MemAvailable: 108722348 kB' 'Buffers: 2704 kB' 'Cached: 14367120 kB' 'SwapCached: 0 kB' 'Active: 11411888 kB' 'Inactive: 3514444 kB' 'Active(anon): 11001072 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559376 kB' 'Mapped: 198368 kB' 'Shmem: 10444564 kB' 'KReclaimable: 299976 kB' 'Slab: 1129788 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 829812 kB' 'KernelStack: 27312 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12581056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
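[editor's note: the gate at hugepages.sh line 96 compares the kernel's THP mode string, 'always [madvise] never', against *[never]*; the bracketed token is the active mode, and the string's format matches the standard transparent_hugepage sysfs file. A minimal sketch of that check (illustrative; the exact file handling in hugepages.sh may differ):

  # The kernel brackets the selected THP mode in this sysfs file.
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      # THP can inflate AnonHugePages, which is why the trace samples
      # it (anon=0 below) before checking the hugepage totals.
      echo "THP active: $thp"   # e.g. 'always [madvise] never'
  fi
]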
'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559376 kB' 'Mapped: 198368 kB' 'Shmem: 10444564 kB' 'KReclaimable: 299976 kB' 'Slab: 1129788 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 829812 kB' 'KernelStack: 27312 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12581056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB' 00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.513 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.514 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.514 10:41:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the setup/common.sh@31-@32 read loop compares each remaining /proc/meminfo key, Inactive through HardwareCorrupted, against AnonHugePages and skips every one with continue]
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
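The records above trace setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a per-NUMA-node meminfo file when a node argument is given), strips the per-node "Node <n> " prefix, then splits each "Key: value" line on ': ' until the requested key matches, echoing its value (0 here for AnonHugePages). A minimal, self-contained sketch of that pattern, assuming bash 4+ for mapfile; the function name is illustrative, not the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch of the get_meminfo pattern traced above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # With a node argument, read that node's meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix each line with "Node <n> "; drop it so
        # both variants parse as "Key: value [unit]".
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the continue run seen in the trace
            echo "${val:-0}"
            return 0
        done
        echo 0
    }

    get_meminfo_sketch HugePages_Total   # prints 1025 on the node traced here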
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.781 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.782 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105472484 kB' 'MemAvailable: 108723400 kB' 'Buffers: 2704 kB' 'Cached: 14367124 kB' 'SwapCached: 0 kB' 'Active: 11411532 kB' 'Inactive: 3514444 kB' 'Active(anon): 11000716 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558996 kB' 'Mapped: 198304 kB' 'Shmem: 10444568 kB' 'KReclaimable: 299976 kB' 'Slab: 1129788 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 829812 kB' 'KernelStack: 27280 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12581076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
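The snapshot just printed is internally consistent: the odd_alloc test has requested 1025 hugepages, and the kernel reports Hugetlb as HugePages_Total times Hugepagesize. A quick shell cross-check using the values from the snapshot above (the variable names are illustrative):

    # Values copied from the meminfo snapshot above.
    hp_total=1025        # HugePages_Total
    hp_size_kb=2048      # Hugepagesize, in kB
    hugetlb_kb=2099200   # Hugetlb, in kB

    # Hugetlb should equal HugePages_Total * Hugepagesize:
    # 1025 * 2048 kB = 2099200 kB.
    (( hp_total * hp_size_kb == hugetlb_kb )) && echo "snapshot consistent"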
[xtrace condensed: the @31-@32 loop walks the snapshot key by key, MemTotal through HugePages_Free, hitting continue on every key that is not HugePages_Surp]
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.783 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105473240 kB' 'MemAvailable: 108724156 kB' 'Buffers: 2704 kB' 'Cached: 14367128 kB' 'SwapCached: 0 kB' 'Active: 11410564 kB' 'Inactive: 3514444 kB' 'Active(anon): 10999748 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558496 kB' 'Mapped: 198224 kB' 'Shmem: 10444572 kB' 'KReclaimable: 299976 kB' 'Slab: 1129780 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 829804 kB' 'KernelStack: 27280 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12581096 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[xtrace condensed: the @31-@32 loop again walks MemTotal through HugePages_Free, skipping every key that is not HugePages_Rsvd]
00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:39.785 nr_hugepages=1025
00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:39.785 resv_hugepages=0
00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:39.785 surplus_hugepages=0
00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:39.785 anon_hugepages=0
00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
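The four echoed values and the two arithmetic tests above are the point of this block: for the odd-allocation case, the script asserts that the 1025 pages it requested equal the kernel's HugePages_Total plus surplus and reserved pages (all of surplus, reserved, and anonymous being 0 here). A condensed sketch of that assertion, reusing the get_meminfo_sketch helper shown earlier; "want" stands in for the requested odd page count and the layout is a sketch of the @97-@109 trace, not the verbatim setup/hugepages.sh source:

    # Gather the counters the trace reads via get_meminfo.
    want=1025
    anon=$(get_meminfo_sketch AnonHugePages)      # 0 in the trace
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
    nr_hugepages=$(get_meminfo_sketch HugePages_Total)

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Both checks from the trace: requested count == total + surplus
    # + reserved, and (with no surplus/reserved) == the plain total.
    (( want == nr_hugepages + surp + resv ))
    (( want == nr_hugepages ))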
nr_hugepages + surp + resv )) 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105474336 kB' 'MemAvailable: 108725252 kB' 'Buffers: 2704 kB' 'Cached: 14367180 kB' 'SwapCached: 0 kB' 'Active: 11410564 kB' 'Inactive: 3514444 kB' 'Active(anon): 10999748 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558420 kB' 'Mapped: 198224 kB' 'Shmem: 10444624 kB' 'KReclaimable: 299976 kB' 'Slab: 1129780 kB' 'SReclaimable: 299976 kB' 'SUnreclaim: 829804 kB' 'KernelStack: 27264 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12581116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB' 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.785 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... the xtrace repeats this same compare-and-continue pair for every remaining /proc/meminfo key (Buffers, Cached, SwapCached, the Active/Inactive counters, swap, dirty/writeback, AnonPages, Mapped, Shmem, slab, kernel-stack, page-table, commit, vmalloc, CMA and THP fields), none of which matches HugePages_Total ...] 00:03:39.787 10:41:56
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54302396 kB' 'MemUsed: 11356612 kB' 'SwapCached: 0 kB' 'Active: 4466820 kB' 'Inactive: 3293756 kB' 'Active(anon): 4324168 
kB' 'Inactive(anon): 0 kB' 'Active(file): 142652 kB' 'Inactive(file): 3293756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7429328 kB' 'Mapped: 70540 kB' 'AnonPages: 334484 kB' 'Shmem: 3992920 kB' 'KernelStack: 15656 kB' 'PageTables: 5448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185680 kB' 'Slab: 680644 kB' 'SReclaimable: 185680 kB' 'SUnreclaim: 494964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:39.787 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... the same compare-and-continue scan runs over the remaining node0 meminfo keys (Active(file) through FilePmdMapped and Unaccepted), none of which matches HugePages_Surp ...] 00:03:39.788 10:41:56
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51172016 kB' 'MemUsed: 9507856 kB' 'SwapCached: 0 kB' 'Active: 6943764 kB' 'Inactive: 220688 kB' 'Active(anon): 6675600 kB' 'Inactive(anon): 0 kB' 'Active(file): 268164 kB' 'Inactive(file): 220688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6940576 kB' 'Mapped: 127684 kB' 'AnonPages: 223936 kB' 'Shmem: 6451724 kB' 'KernelStack: 11608 kB' 'PageTables: 3160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114296 kB' 'Slab: 449136 kB' 'SReclaimable: 114296 kB' 'SUnreclaim: 334840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
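The two dumps above are what get_meminfo works from: setup/common.sh slurps either /proc/meminfo or /sys/devices/system/node/nodeN/meminfo with mapfile, strips the "Node <N> " prefix from the per-node variant, then walks the lines with IFS=': ' read until the requested key matches and echoes its value column. A minimal standalone sketch of that technique (hypothetical helper name; not the SPDK implementation itself):

get_meminfo_sketch() {
  # get_meminfo_sketch <Key> [node] -> prints the value column for <Key>
  local get=$1 node=$2 mem_f=/proc/meminfo line var val _
  # Per-node queries read that node's own meminfo file when it exists.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  while IFS= read -r line; do
    line=${line#Node }                   # per-node lines start "Node <N> ..."
    [[ $line == [0-9]* ]] && line=${line#* }
    IFS=': ' read -r var val _ <<< "$line"   # split "Key: value [kB]"
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < "$mem_f"
  return 1
}

Against the state traced above, get_meminfo_sketch HugePages_Total would print 1025, and get_meminfo_sketch HugePages_Surp 0 would print 0.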
00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.788 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... identical compare-and-continue scan over the node1 meminfo keys down through HugePages_Free, none matching HugePages_Surp ...] 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:39.790 node0=512 expecting 513 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:39.790 node1=513 expecting 512 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:39.790 00:03:39.790 real 0m3.880s 00:03:39.790 user 0m1.574s 00:03:39.790 sys 0m2.367s 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.790 10:41:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:39.790 ************************************ 00:03:39.790 END TEST odd_alloc 00:03:39.790 ************************************ 00:03:39.790 10:41:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:39.790 10:41:56 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:39.790 10:41:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.790 10:41:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.790 10:41:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:39.790 ************************************ 00:03:39.790 START TEST custom_alloc 00:03:39.790 ************************************ 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.790 10:41:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.006 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:44.006 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:44.006 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:44.006 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
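For the custom_alloc case the script has just exported HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' before calling scripts/setup.sh, i.e. it asks for 512 huge pages on node 0 and 1024 on node 1 (1536 in total, matching the nr_hugepages=1536 check below). On a stock kernel, a per-node request of this shape reduces to sysfs writes like the following sketch (illustrative only, using the standard 2048 kB hugepage sysfs layout; this is not the actual setup.sh code path):

# Illustrative only -- request 512 x 2 MiB pages on node 0, 1024 on node 1.
declare -A nodes_hp=([0]=512 [1]=1024)   # node -> requested page count
for node in "${!nodes_hp[@]}"; do
  echo "${nodes_hp[$node]}" | sudo tee \
    "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
done
# Confirm what the kernel actually granted on each node:
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages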
00:03:44.006 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:44.006 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
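The common.sh@17-31 prologue above is the setup for get_meminfo: read the whole meminfo file once, strip the "Node <n> " prefix that per-node sysfs files carry, then scan it key by key. A self-contained reconstruction under those visible assumptions, not the verbatim SPDK source:

    # Reconstruction of the get_meminfo pattern traced at common.sh@17-33.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # a per-node query reads that NUMA node's own meminfo instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix each line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        return 1
    }
    # e.g. get_meminfo HugePages_Free    -> 1536 on this box
    #      get_meminfo HugePages_Free 0  -> node 0's share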
00:03:44.006 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104426040 kB' 'MemAvailable: 107676940 kB' 'Buffers: 2704 kB' 'Cached: 14367296 kB' 'SwapCached: 0 kB' 'Active: 11425884 kB' 'Inactive: 3514444 kB' 'Active(anon): 11015068 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573756 kB' 'Mapped: 199224 kB' 'Shmem: 10444740 kB' 'KReclaimable: 299944 kB' 'Slab: 1129808 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 829864 kB' 'KernelStack: 27456 kB' 'PageTables: 9440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12601816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235676 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[... xtrace condensed: setup/common.sh@31-32 repeats "IFS=': '; read -r var val _" and 'continue' for every non-matching meminfo key, MemTotal through HardwareCorrupted ...]
00:03:44.008 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.008 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.008 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:44.008 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:44.008 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
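With anon resolved to 0, verify_nr_hugepages moves on to the surplus and reserved counters below. The accounting it is driving at, as a sketch; the sorted_t/sorted_s per-node bookkeeping declared at @90-91 is omitted here, and the final comparison is an assumed shape rather than the traced code:

    # Sketch of the hugepages.sh@89-100 bookkeeping visible in this trace.
    verify_nr_hugepages() {
        local surp resv anon=0

        # @96: AnonHugePages only counts when THP is not set to "[never]"
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
            anon=$(get_meminfo AnonHugePages)  # -> 0 in this run
        fi
        surp=$(get_meminfo HugePages_Surp)     # -> 0
        resv=$(get_meminfo HugePages_Rsvd)     # traced further below

        # with HugePages_Total=1536, HugePages_Free=1536 and no surplus,
        # reserved or anon pages, nodes_hp[0]=512 + nodes_hp[1]=1024 adds up
        (($(get_meminfo HugePages_Total) == nr_hugepages)) || return 1
    }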
00:03:44.008 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[... xtrace condensed: same get_meminfo prologue as above (setup/common.sh@18-31: node unset, mem_f=/proc/meminfo, mapfile -t mem, "Node <n>" prefix strip, IFS=': ' read loop) ...]
00:03:44.008 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104426900 kB' 'MemAvailable: 107677800 kB' 'Buffers: 2704 kB' 'Cached: 14367300 kB' 'SwapCached: 0 kB' 'Active: 11416384 kB' 'Inactive: 3514444 kB' 'Active(anon): 11005568 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564368 kB' 'Mapped: 198812 kB' 'Shmem: 10444744 kB' 'KReclaimable: 299944 kB' 'Slab: 1129768 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 829824 kB' 'KernelStack: 27584 kB' 'PageTables: 9484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12589368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[... xtrace condensed: the HugePages_Surp scan walks every key from MemTotal through HugePages_Rsvd, hitting 'continue' on each non-match ...]
00:03:44.010 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.010 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.010 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:44.010 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:44.010 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:44.010 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... xtrace condensed: same get_meminfo prologue again (setup/common.sh@18-31) ...]
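Aside: the per-node counters this verification cross-checks are also exposed directly in sysfs, which is a quick way to sanity-check a run like this by hand. The paths are the standard kernel hugetlb sysfs layout; the loop itself is only an illustration:

    # Read each NUMA node's 2 MiB hugepage pool straight from sysfs.
    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-2048kB
        printf '%s: total=%s free=%s surplus=%s\n' \
            "${node##*/}" \
            "$(<"$hp"/nr_hugepages)" \
            "$(<"$hp"/free_hugepages)" \
            "$(<"$hp"/surplus_hugepages)"
    done
    # expected here: node0 total=512, node1 total=1024 (1536 overall)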
00:03:44.010 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104423596 kB' 'MemAvailable: 107674496 kB' 'Buffers: 2704 kB' 'Cached: 14367316 kB' 'SwapCached: 0 kB' 'Active: 11420416 kB' 'Inactive: 3514444 kB' 'Active(anon): 11009600 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568276 kB' 'Mapped: 198768 kB' 'Shmem: 10444760 kB' 'KReclaimable: 299944 kB' 'Slab: 1129784 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 829840 kB' 'KernelStack: 27472 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12595736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235608 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[... xtrace condensed: the HugePages_Rsvd scan continues past every key from MemTotal through Unaccepted ...]
00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc --
setup/common.sh@32 -- # continue 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:44.011 nr_hugepages=1536 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.011 resv_hugepages=0 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.011 surplus_hugepages=0 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.011 anon_hugepages=0 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.011 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104422376 kB' 'MemAvailable: 107673276 kB' 'Buffers: 2704 kB' 'Cached: 14367340 kB' 'SwapCached: 0 kB' 'Active: 11414648 kB' 'Inactive: 3514444 kB' 'Active(anon): 11003832 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562040 kB' 'Mapped: 198612 kB' 'Shmem: 10444784 kB' 'KReclaimable: 299944 kB' 'Slab: 1129784 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 829840 kB' 'KernelStack: 27312 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12587380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB' 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.012 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
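What the trace above exercises is simple: get_meminfo dumps the meminfo file, splits each line on ': ', skips every key that is not the one requested, and echoes the value of the one that is. A minimal standalone sketch of that pattern in bash; get_meminfo_sketch is an illustrative name, not SPDK's actual helper:

shopt -s extglob                              # needed for the "Node N " prefix pattern

# Sketch only: echo the value of key $1 from /proc/meminfo, or from
# /sys/devices/system/node/node$2/meminfo when a node is given.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }           # per-node files prefix every key with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# On the box traced above this would print 1536 and 0 respectively:
#   get_meminfo_sketch HugePages_Total
#   get_meminfo_sketch HugePages_Surp 0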
[... xtrace elided: the read loop scans the dump above key by key (MemTotal through Unaccepted), matching each against HugePages_Total and skipping it with continue ...]
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.013 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54297592 kB' 'MemUsed: 11361416 kB' 'SwapCached: 0 kB' 'Active: 4472424 kB' 'Inactive: 3293756 kB' 'Active(anon): 4329772 kB' 'Inactive(anon): 0 kB' 'Active(file): 142652 kB' 'Inactive(file): 3293756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7429392 kB' 'Mapped: 71020 kB' 'AnonPages: 340080 kB' 'Shmem: 3992984 kB' 'KernelStack: 15800 kB' 'PageTables: 6148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185648 kB' 'Slab: 680764 kB' 'SReclaimable: 185648 kB' 'SUnreclaim: 495116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
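The get_nodes step above discovers NUMA nodes with an extglob match on /sys/devices/system/node/node+([0-9]) and records a per-node page count (512 for node0, 1024 for node1 in this run). A sketch of that enumeration; reading the count from the hugepages-2048kB sysfs leaf is an assumption based on the 'Hugepagesize: 2048 kB' value in the dump above, not a quote of setup/hugepages.sh:

shopt -s extglob                              # +([0-9]) in the glob below needs extglob

# Sketch: map node number -> nr_hugepages for the default 2 MiB size.
# The hugepages-2048kB path is an assumption, as noted in the text.
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"              # prints no_nodes=2 here: node0=512, node1=1024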
[... xtrace elided: the read loop scans the node0 dump key by key (MemTotal through HugePages_Free), matching each against HugePages_Surp and skipping it with continue ...]
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.014 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50124528 kB' 'MemUsed: 10555344 kB' 'SwapCached: 0 kB' 'Active: 6945236 kB' 'Inactive: 220688 kB' 'Active(anon): 6677072 kB' 'Inactive(anon): 0 kB' 'Active(file): 268164 kB' 'Inactive(file): 220688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6940688 kB' 'Mapped: 127748 kB' 'AnonPages: 225356 kB' 'Shmem: 6451836 kB' 'KernelStack: 11720 kB' 'PageTables: 3572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114296 kB' 'Slab: 449020 kB' 'SReclaimable: 114296 kB' 'SUnreclaim: 334724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
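Taken together, the checks traced at setup/hugepages.sh@107, @109, @110 and @115-@117 boil down to one invariant: HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages, both globally and once the per-node counts are folded in (here 1536 = 512 + 1024 with resv = surp = 0). A standalone restatement under that reading; m() is an illustrative helper, and 1536 is the value this run requested:

#!/usr/bin/env bash
# Sketch: re-run the hugepage accounting check outside the harness.
nr_hugepages=1536
m() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
resv=$(m HugePages_Rsvd)     # 0 in this log
surp=$(m HugePages_Surp)     # 0 in this log
total=$(m HugePages_Total)   # 1536 in this log
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total"
else
    echo "mismatch: total=$total, expected $((nr_hugepages + surp + resv))" >&2
    exit 1
fi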
[... xtrace elided: the read loop scans the node1 dump key by key against HugePages_Surp, skipping each non-matching key with continue ...]
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:44.015 node0=512 expecting 512 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:44.015 node1=1024 expecting 1024 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:44.015 00:03:44.015 real 0m3.934s 00:03:44.015 user 0m1.626s 00:03:44.015 sys 0m2.370s 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.015 10:42:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.015 ************************************ 00:03:44.015 END TEST custom_alloc 00:03:44.015 ************************************ 00:03:44.015 10:42:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:44.015 10:42:00 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:44.015 10:42:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.015 10:42:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.015 10:42:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.015 ************************************ 00:03:44.015 START TEST no_shrink_alloc 00:03:44.015 ************************************ 00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
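The trace above is the tail of the custom_alloc check: get_meminfo walks the node's meminfo fields until it hits HugePages_Surp (0 surplus pages), the surplus is folded into nodes_test, and the sorted per-node totals match the request, so the test prints node0=512 / node1=1024 and passes. A minimal standalone sketch of that per-node verification, assuming 2 MB pages and the kernel's sysfs hugepages layout — the expected counts are this run's inputs, and this is not the verbatim SPDK helper:

    #!/usr/bin/env bash
    # Expected per-node 2 MB huge page counts, as requested by the test run above.
    declare -A nodes_test=([0]=512 [1]=1024)
    for node in "${!nodes_test[@]}"; do
        # Read the live count for this NUMA node from sysfs.
        actual=$(</sys/devices/system/node/node"$node"/hugepages/hugepages-2048kB/nr_hugepages)
        echo "node$node=$actual expecting ${nodes_test[$node]}"
        (( actual == nodes_test[node] )) || exit 1
    done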
00:03:44.015 ************************************
00:03:44.015 START TEST no_shrink_alloc
00:03:44.015 ************************************
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:44.015 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:44.016 10:42:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:47.316 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:47.316 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:47.316 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
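Before probing meminfo, the no_shrink_alloc prologue above sized the request: 2097152 kB of huge pages at the 2048 kB default size gives nr_hugepages=1024, all pinned to user-specified node 0 by get_test_nr_hugepages_per_node. A hedged reconstruction of that helper from the xtrace — variable names follow the trace, but the even-split fallback branch is an assumption, since this run only exercises the user-nodes path:

    # Sketch of get_test_nr_hugepages_per_node as it appears in the trace above.
    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")            # node IDs passed by the caller, e.g. "0"
        local _nr_hugepages=$nr_hugepages  # total pages computed by get_test_nr_hugepages
        local _no_nodes=2                  # NUMA nodes on this box
        local -g nodes_test=()
        if ((${#user_nodes[@]} > 0)); then
            # The trace reuses _no_nodes as the loop variable over the named nodes.
            for _no_nodes in "${user_nodes[@]}"; do
                nodes_test[_no_nodes]=$_nr_hugepages  # pin the whole request to each named node
            done
            return 0
        fi
        # Assumed fallback: divide the request evenly across all nodes.
        local node
        for ((node = 0; node < _no_nodes; node++)); do
            nodes_test[node]=$((_nr_hugepages / _no_nodes))
        done
    }

    # Usage matching this run: nr_hugepages=1024; get_test_nr_hugepages_per_node 0
    # leaves nodes_test[0]=1024.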
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.579 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105388120 kB' 'MemAvailable: 108639020 kB' 'Buffers: 2704 kB' 'Cached: 14367468 kB' 'SwapCached: 0 kB' 'Active: 11420668 kB' 'Inactive: 3514444 kB' 'Active(anon): 11009852 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568208 kB' 'Mapped: 199196 kB' 'Shmem: 10444912 kB' 'KReclaimable: 299944 kB' 'Slab: 1130108 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 830164 kB' 'KernelStack: 27392 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12592132 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235640 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
00:03:47.580 [... the same setup/common.sh@31 read and @32 compare/continue trace repeats for each /proc/meminfo field from MemTotal through HardwareCorrupted until the requested AnonHugePages field is reached ...]
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
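anon=0 above is the return of get_meminfo AnonHugePages. The helper's shape is fully visible in the xtrace: pick /proc/meminfo (or a node-specific meminfo when a node argument is given), strip any "Node <n>" prefix, then scan "field: value" pairs until the requested field matches and echo its value. A sketch reconstructed from that trace, not the verbatim setup/common.sh source:

    # Sketch of get_meminfo as traced above: get_meminfo <field> [node].
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, read that node's meminfo instead of the global one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node "Node N " prefix if present
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                    # the numeric value; trailing "kB" lands in _
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Usage matching this run: get_meminfo AnonHugePages prints 0,
    # get_meminfo HugePages_Total prints 1024.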
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.581 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105389344 kB' 'MemAvailable: 108640244 kB' 'Buffers: 2704 kB' 'Cached: 14367472 kB' 'SwapCached: 0 kB' 'Active: 11420312 kB' 'Inactive: 3514444 kB' 'Active(anon): 11009496 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567900 kB' 'Mapped: 199128 kB' 'Shmem: 10444916 kB' 'KReclaimable: 299944 kB' 'Slab: 1130108 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 830164 kB' 'KernelStack: 27376 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12592152 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235592 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
00:03:47.582 [... the same setup/common.sh@31 read and @32 compare/continue trace repeats for each /proc/meminfo field from MemTotal through HugePages_Rsvd until the requested HugePages_Surp field is reached ...]
00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
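With anon=0 and surp=0 established, the next probe (below) fetches HugePages_Rsvd, so verify_nr_hugepages can rule out transparent, surplus, and reserved huge pages before trusting HugePages_Total and HugePages_Free. A standalone check in the same spirit, assuming the standard /proc/meminfo field names; the assertion mirrors this run's snapshots (Free == Total == 1024, Rsvd == Surp == 0) and is this test's expectation, not a kernel invariant:

    # Pull the four hugetlb accounting fields and confirm all pages are idle.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    echo "total=$total free=$free rsvd=$rsvd surp=$surp"
    (( surp == 0 && rsvd == 0 && free == total )) && echo "all huge pages idle"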
'Writeback: 0 kB' 'AnonPages: 568924 kB' 'Mapped: 199128 kB' 'Shmem: 10444916 kB' 'KReclaimable: 299944 kB' 'Slab: 1130184 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 830240 kB' 'KernelStack: 27344 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12613216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235576 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 
10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.583 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.584 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.584 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.584 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.584 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.846 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.847 nr_hugepages=1024 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.847 resv_hugepages=0 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.847 surplus_hugepages=0 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.847 anon_hugepages=0 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105389184 kB' 'MemAvailable: 108640084 kB' 'Buffers: 2704 kB' 'Cached: 14367512 kB' 'SwapCached: 0 kB' 'Active: 11420480 kB' 'Inactive: 3514444 kB' 'Active(anon): 11009664 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568020 kB' 'Mapped: 199128 kB' 'Shmem: 10444956 kB' 'KReclaimable: 299944 kB' 'Slab: 1130184 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 830240 kB' 'KernelStack: 27280 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12591828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235528 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.847 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.848 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.848 10:42:04 
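The pattern traced above (setup/common.sh@17-@33) is the get_meminfo helper: it slurps the chosen meminfo file once with mapfile, then splits each "Field: value kB" line on IFS=': ' and returns the value of the first field that matches the requested name. A minimal sketch of that helper, reconstructed from the trace entries alone; variable names follow the @17-@33 lines, while the exact control flow of scripts/setup/common.sh is an assumption:

  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1    # field to look up, e.g. HugePages_Rsvd
      local node=$2   # optional NUMA node; empty means system-wide
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # Per-node queries read the node-local file instead, e.g. node0/meminfo.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Node files prefix every line with "Node N "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")

      # Split each "Field: value kB" line on ': ' and return on the first match.
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

Usage mirroring the traced calls: get_meminfo HugePages_Rsvd prints 0 here, and get_meminfo HugePages_Surp 0 reads the node0 file instead of /proc/meminfo.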
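For reference, HugePages_Surp counts surplus pages allocated beyond nr_hugepages (only possible with overcommit enabled) and HugePages_Rsvd counts pages promised to mappings but not yet faulted in; the test expects both to be 0 for a no-shrink allocation. The helper deliberately stays in pure bash (mapfile plus read), so each lookup costs no forks even though it rescans every field. When checking a box by hand, an equivalent one-shot query, using awk as an illustration rather than anything the test scripts run, would be:

  awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo
  # Node files carry the "Node N " prefix the script strips, so the field shifts:
  awk '$3 == "HugePages_Total:" {print $4}' /sys/devices/system/node/node0/meminfo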
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.849 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53177384 kB' 'MemUsed: 12481624 kB' 'SwapCached: 0 kB' 'Active: 4473940 kB' 'Inactive: 3293756 kB' 'Active(anon): 4331288 kB' 'Inactive(anon): 0 kB' 'Active(file): 142652 kB' 'Inactive(file): 3293756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7429412 kB' 'Mapped: 71392 kB' 'AnonPages: 341432 kB' 'Shmem: 3993004 kB' 'KernelStack: 15640 kB' 'PageTables: 5348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185648 kB' 'Slab: 681116 kB' 'SReclaimable: 185648 kB' 'SUnreclaim: 495468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace elided: the @31/@32 scan walks node0's meminfo fields from MemTotal through HugePages_Free, comparing each against HugePages_Surp and skipping it with continue ...]
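The get_nodes trace (setup/hugepages.sh@27-@33) enumerates the NUMA node directories under /sys and records one hugepage count per node: 1024 for node0 and 0 for node1 in this run. A sketch of that enumeration, assuming the 2048 kB page size this run uses and reading the counts directly from sysfs rather than from the script's earlier bookkeeping:

  shopt -s extglob nullglob

  declare -A nodes_sys
  # One entry per NUMA node directory, keyed by node number, seeded with
  # that node's configured 2 MiB hugepage count.
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done

  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || exit 1   # at least one node must expose hugepages
  echo "nodes: ${!nodes_sys[*]} -> counts: ${nodes_sys[*]}"

The ${node##*node} expansion strips everything up to the last "node", leaving just the node number as the array key, which is the same trick the @30 trace lines show.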
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.850 node0=1024 expecting 1024 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.850 10:42:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.150 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.150 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.150 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.150 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.150 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.150 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.150 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.150 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.150 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.150 0000:65:00.0 
00:03:51.150 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:51.150 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:51.150 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:51.150 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:51.150 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:51.150 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:51.150 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:51.411 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.411 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105419228 kB' 'MemAvailable: 108670128 kB' 'Buffers: 2704 kB' 'Cached: 14367624 kB' 'SwapCached: 0 kB' 'Active: 11427072 kB' 'Inactive: 3514444 kB' 'Active(anon): 11016256 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573908 kB' 'Mapped: 199312 kB' 'Shmem: 10445068 kB' 'KReclaimable: 299944 kB' 'Slab: 1129980 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 830036 kB' 'KernelStack: 27328 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12599280 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235612 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
00:03:51.412 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: 'read -r var val _' / 'continue' repeated for each /proc/meminfo key from MemTotal through HardwareCorrupted]
00:03:51.413 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.413 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.413 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:51.413 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
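All of the trace above comes from one helper, get_meminfo, scanning /proc/meminfo key by key. Condensed into ordinary shell, the traced commands (setup/common.sh@17-33) amount to roughly the sketch below; this is a reconstruction from the trace, not the actual SPDK source, and details of the real script may differ:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Reconstructed sketch of get_meminfo: print the value of one meminfo key,
    # optionally for a single NUMA node (argument handling simplified here).
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node lookups read the node's own meminfo file when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        echo 0   # key not present: report 0
    }

    get_meminfo HugePages_Total     # system-wide; 1024 on this box per the snapshot above
    get_meminfo HugePages_Surp 0    # node0 surplus pages; 0 here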
00:03:51.413 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105420036 kB' 'MemAvailable: 108670936 kB' 'Buffers: 2704 kB' 'Cached: 14367624 kB' 'SwapCached: 0 kB' 'Active: 11426500 kB' 'Inactive: 3514444 kB' 'Active(anon): 11015684 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573968 kB' 'Mapped: 199164 kB' 'Shmem: 10445068 kB' 'KReclaimable: 299944 kB' 'Slab: 1130028 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 830084 kB' 'KernelStack: 27328 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12599300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235580 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
00:03:51.677 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: 'read -r var val _' / 'continue' repeated for each /proc/meminfo key from MemTotal through HugePages_Rsvd]
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
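For comparison, the hugepage counters these repeated /proc/meminfo scans extract (HugePages_Total/Free/Rsvd/Surp) are also exposed per NUMA node through the kernel's standard sysfs layout, which avoids parsing the whole file; the snippet below is generic kernel interface usage, not part of the SPDK scripts:

    # Per-node 2 MiB hugepage counters, read straight from the kernel sysfs ABI.
    d=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    echo "total=$(<"$d"/nr_hugepages) free=$(<"$d"/free_hugepages) surplus=$(<"$d"/surplus_hugepages)"
    # on this box: total=1024 free=1024 surplus=0, matching the meminfo snapshots above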
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.679 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.680 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105423584 kB' 'MemAvailable: 108674484 kB' 'Buffers: 2704 kB' 'Cached: 14367664 kB' 'SwapCached: 0 kB' 'Active: 11427364 kB' 'Inactive: 3514444 kB' 'Active(anon): 11016548 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574840 kB' 'Mapped: 199164 kB' 'Shmem: 10445108 kB' 'KReclaimable: 299944 kB' 'Slab: 1130004 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 830060 kB' 'KernelStack: 27392 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12599692 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235580 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
00:03:51.680 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: 'read -r var val _' / 'continue' per /proc/meminfo key vs HugePages_Rsvd, MemTotal through Writeback so far; scan continues below]
00:03:51.680 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
-- # IFS=': ' 00:03:51.680 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.680 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.681 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:51.682 nr_hugepages=1024 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.682 resv_hugepages=0 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.682 surplus_hugepages=0 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.682 anon_hugepages=0 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
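Each get_meminfo call traced above is the same shell pattern: split every /proc/meminfo line on IFS=': ', compare the key against the requested name, and echo the value on the first match. A minimal standalone sketch of that pattern (the meminfo_value name is illustrative, not the actual setup/common.sh helper):

    # Return the value column of one /proc/meminfo key, as the trace above does.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # var is the key (e.g. "HugePages_Rsvd"), val its value, _ the unit if any
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # meminfo_value HugePages_Rsvd  ->  0, matching the 'echo 0' above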
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.682 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105423152 kB' 'MemAvailable: 108674052 kB' 'Buffers: 2704 kB' 'Cached: 14367704 kB' 'SwapCached: 0 kB' 'Active: 11426660 kB' 'Inactive: 3514444 kB' 'Active(anon): 11015844 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574032 kB' 'Mapped: 199164 kB' 'Shmem: 10445148 kB' 'KReclaimable: 299944 kB' 'Slab: 1130004 kB' 'SReclaimable: 299944 kB' 'SUnreclaim: 830060 kB' 'KernelStack: 27360 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12599712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235564 kB' 'VmallocChunk: 0 kB' 'Percpu: 121536 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4033908 kB' 'DirectMap2M: 29200384 kB' 'DirectMap1G: 102760448 kB'
[xtrace elided: setup/common.sh@31-32 'continue' on every key ahead of HugePages_Total]
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.684 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53171952 kB' 'MemUsed: 12487056 kB' 'SwapCached: 0 kB' 'Active: 4473280 kB' 'Inactive: 3293756 kB' 'Active(anon): 4330628 kB' 'Inactive(anon): 0 kB' 'Active(file): 142652 kB' 'Inactive(file): 3293756 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7429480 kB' 'Mapped: 71412 kB' 'AnonPages: 340708 kB' 'Shmem: 3993072 kB' 'KernelStack: 15672 kB' 'PageTables: 5528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185648 kB' 'Slab: 681116 kB' 'SReclaimable: 185648 kB' 'SUnreclaim: 495468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 'continue' on every node0 key ahead of HugePages_Surp]
00:03:51.686 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.686 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.686 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:51.686 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:51.686 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:51.686 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:51.686 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
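For the per-node pass above, common.sh@23-24 switches mem_f to /sys/devices/system/node/node0/meminfo. Lines there carry a "Node 0 " prefix ("Node 0 HugePages_Surp: 0"), which common.sh@29 strips with an extglob substitution so the same key/value parser works unchanged. A minimal sketch of that strip:

    # Read node-local meminfo and drop the "Node <n> " prefix, as common.sh@29 does.
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep HugePages_Surp   # -> "HugePages_Surp: 0"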
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:51.686 node0=1024 expecting 1024
00:03:51.686 10:42:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:51.686
00:03:51.686 real 0m7.758s
00:03:51.686 user 0m3.021s
00:03:51.686 sys 0m4.867s
00:03:51.686 10:42:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:51.686 10:42:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:51.686 ************************************
00:03:51.686 END TEST no_shrink_alloc
00:03:51.686 ************************************
00:03:51.686 10:42:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:51.686 10:42:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:51.686
00:03:51.686 real 0m28.178s
00:03:51.686 user 0m11.243s
00:03:51.686 sys 0m17.400s
00:03:51.686 10:42:08 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:51.686 10:42:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:51.686 ************************************
00:03:51.686 END TEST hugepages
00:03:51.686 ************************************
00:03:51.686 10:42:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:51.686 10:42:08 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:51.686 10:42:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:51.686 10:42:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:51.686 10:42:08 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:51.686 ************************************
00:03:51.686 START TEST driver
00:03:51.686 ************************************
00:03:51.686 10:42:08 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:51.948 * Looking for test storage...
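Note: the wall of "continue" lines above is setup/common.sh splitting a meminfo listing on ': ' and skipping every field until it reaches HugePages_Surp, and clear_hp then zeroing each per-node hugepage pool. A minimal standalone sketch of both patterns (get_meminfo_field and clear_all_hugepages are illustrative names, not the script's own; the sketch reads the system-wide /proc/meminfo, while the script also handles per-node listings):

    #!/usr/bin/env bash
    # Split each meminfo line on ': ' and skip fields until the one we want,
    # the same IFS=': ' read loop visible in the xtrace above.
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    # Zero every per-node hugepage pool, mirroring clear_hp's nested loops
    # over /sys/devices/system/node/node*/hugepages/hugepages-* (needs root).
    clear_all_hugepages() {
        local hp
        for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
            echo 0 > "$hp"
        done
    }

    get_meminfo_field HugePages_Surp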
00:03:51.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.948 10:42:08 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:51.948 10:42:08 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.948 10:42:08 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.242 10:42:13 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:57.242 10:42:13 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.242 10:42:13 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.242 10:42:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:57.242 ************************************ 00:03:57.242 START TEST guess_driver 00:03:57.242 ************************************ 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:57.242 10:42:13 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:57.242 Looking for driver=vfio-pci 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.242 10:42:13 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 
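Note: the driver pick logged above (setup/driver.sh@36 onward) boils down to "use vfio-pci when the kernel exposes populated IOMMU groups and the module's dependency chain resolves". A rough standalone equivalent, assuming uio_pci_generic as the fallback (the fallback branch never runs in this log, so that name is an assumption):

    #!/usr/bin/env bash
    shopt -s nullglob
    # Prefer vfio-pci if IOMMU groups are populated and modprobe can
    # resolve the module, as the @24-@30 checks above do.
    pick_pci_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &>/dev/null; then
            echo vfio-pci
        else
            echo uio_pci_generic   # assumed fallback, not shown in this log
        fi
    }
    pick_pci_driver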
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.546 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.808 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.073 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:01.073 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:01.073 10:42:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.073 10:42:17 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.447 00:04:06.447 real 0m9.109s 00:04:06.447 user 0m2.940s 00:04:06.447 sys 0m5.277s 00:04:06.447 10:42:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.447 10:42:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:06.447 ************************************ 00:04:06.447 END TEST guess_driver 00:04:06.447 ************************************ 00:04:06.447 10:42:23 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:06.447 00:04:06.447 real 0m14.420s 00:04:06.447 user 0m4.474s 00:04:06.447 sys 0m8.178s 00:04:06.447 10:42:23 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:06.447 10:42:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:06.447 ************************************
00:04:06.447 END TEST driver
00:04:06.447 ************************************
00:04:06.447 10:42:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:06.447 10:42:23 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:06.447 10:42:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:06.447 10:42:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:06.447 10:42:23 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:06.447 ************************************
00:04:06.447 START TEST devices
00:04:06.447 ************************************
00:04:06.447 10:42:23 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:06.447 * Looking for test storage...
00:04:06.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:06.447 10:42:23 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:06.447 10:42:23 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:06.447 10:42:23 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:06.447 10:42:23 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]]
00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:10.656 10:42:27 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:10.656
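Note: get_zoned_devs and the devices.sh@196-@206 block above filter the machine's NVMe namespaces: zoned namespaces are excluded, anything smaller than min_disk_size (3221225472 bytes, i.e. 3 GiB) is dropped, and survivors are mapped to their PCI address. A condensed sketch of the same filter:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    # Keep non-zoned NVMe namespaces of at least 3 GiB; !(*c*) skips the
    # nvmeXcYnZ multipath nodes just as devices.sh@200 does above.
    min_disk_size=3221225472
    blocks=()
    for dev in /sys/block/nvme!(*c*); do
        zoned=$(cat "$dev/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned != none ]] && continue          # skip zoned namespaces
        size=$(( $(cat "$dev/size") * 512 ))      # the sysfs size file counts 512B sectors
        (( size >= min_disk_size )) && blocks+=("${dev##*/}")
    done
    printf '%s\n' "${blocks[@]}"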
10:42:27 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:10.656 No valid GPT data, bailing 00:04:10.656 10:42:27 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:10.656 10:42:27 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:10.656 10:42:27 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:10.656 10:42:27 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:10.656 10:42:27 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:10.656 10:42:27 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:10.656 10:42:27 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.656 10:42:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.656 ************************************ 00:04:10.656 START TEST nvme_mount 00:04:10.656 ************************************ 00:04:10.656 10:42:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:10.656 10:42:27 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:10.656 10:42:27 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:10.656 10:42:27 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.656 10:42:27 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:10.657 10:42:27 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:11.600 Creating new GPT entries in memory. 00:04:11.600 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:11.600 other utilities. 00:04:11.600 10:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:11.600 10:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.600 10:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:11.600 10:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:11.600 10:42:28 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:12.542 Creating new GPT entries in memory. 00:04:12.542 The operation has completed successfully. 00:04:12.542 10:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:12.542 10:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.542 10:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1858546 00:04:12.542 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.542 10:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:12.542 10:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.542 10:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:12.542 10:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.804 10:42:29 
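Note: the nvme_mount setup above is a wipe/partition/wait sequence: zap the GPT, create one ~1 GiB partition (sectors 2048-2099199) while holding flock on the disk so nothing else races sgdisk on the device node, and block until the kernel has announced the new partition. A bare-bones version (the polling loop stands in for the script's sync_dev_uevents.sh helper, which waits on real uevents):

    #!/usr/bin/env bash
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                             # destroy old GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # one ~1 GiB partition
    # Wait for the partition device node to appear before touching it.
    until [[ -b ${disk}p1 ]]; do sleep 0.1; done
    echo "${disk}p1 ready"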
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.804 10:42:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:16.108 10:42:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:16.368 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:16.368 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:16.629 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:16.629 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:16.629 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:16.629 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.629 10:42:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 
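Note: the mkfs step at setup/common.sh@66-@72 above takes an optional size argument, which is why the whole ~1.9 TB namespace ends up carrying only a 1024M filesystem: mkfs.ext4 accepts a size operand after the device. The helper reduces to roughly this (make_and_mount is an illustrative name):

    #!/usr/bin/env bash
    # Format and mount as setup/common.sh does; -qF suppresses the prompt
    # for formatting a whole disk, and the optional size caps the filesystem.
    make_and_mount() {
        local dev=$1 mnt=$2 size=$3
        mkdir -p "$mnt"
        mkfs.ext4 -qF "$dev" $size   # size (e.g. 1024M) left unquoted so an empty value vanishes
        mount "$dev" "$mnt"
    }
    make_and_mount /dev/nvme0n1 ./nvme_mount 1024M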
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.930 10:42:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.500 10:42:37 
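Note: each verify pass above re-runs scripts/setup.sh config with PCI_ALLOWED pinned to the controller under test and looks for the "Active devices: ..., so not binding PCI dev" status line, i.e. proof that setup.sh refused to grab a disk that is mounted or otherwise busy. Assuming setup.sh config prints a per-device status line like the one quoted in the log, and run from the SPDK checkout, the check is essentially a grep:

    # The busy-disk status line printed for the allowed controller reads like:
    #   Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev
    PCI_ALLOWED=0000:65:00.0 ./scripts/setup.sh config \
        | grep -q 'Active devices: .*mount@nvme0n1:nvme0n1' \
        && echo 'setup.sh left the mounted disk alone'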
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.500 10:42:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.801 10:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.062 10:42:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.062 10:42:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.062 10:42:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:24.062 10:42:41 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:24.062 10:42:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.062 10:42:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.062 10:42:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.062 10:42:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.062 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.062 00:04:24.062 real 0m13.594s 00:04:24.062 user 0m4.308s 00:04:24.062 sys 0m7.173s 00:04:24.062 10:42:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.062 10:42:41 
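Note: cleanup_nvme, whose tail is logged above along with the wipefs output, is just "unmount if mounted, then erase the filesystem signatures" so the next test starts from a blank device:

    #!/usr/bin/env bash
    # Tear down whatever nvme_mount left behind: unmount, then wipe signatures.
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1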
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:24.062 ************************************ 00:04:24.062 END TEST nvme_mount 00:04:24.062 ************************************ 00:04:24.324 10:42:41 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:24.324 10:42:41 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:24.324 10:42:41 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.324 10:42:41 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.324 10:42:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:24.324 ************************************ 00:04:24.324 START TEST dm_mount 00:04:24.324 ************************************ 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.324 10:42:41 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:25.290 Creating new GPT entries in memory. 00:04:25.290 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.290 other utilities. 00:04:25.290 10:42:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.290 10:42:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.290 10:42:42 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:25.290 10:42:42 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.290 10:42:42 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:26.233 Creating new GPT entries in memory. 00:04:26.233 The operation has completed successfully. 00:04:26.233 10:42:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.233 10:42:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.233 10:42:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.233 10:42:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.233 10:42:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:27.619 The operation has completed successfully. 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1863729 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.619 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.620 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.620 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:27.620 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.620 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.620 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:27.620 10:42:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.620 10:42:44 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.620 10:42:44 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.922 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.184 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.184 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:31.184 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.184 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.184 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.184 10:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:31.184 10:42:48 
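Note: the holder@nvme0n1p1:dm-0 markers above come straight from sysfs: once device-mapper stitches the two partitions into nvme_dm_test, each partition lists the dm node under its holders/ directory, which is what verify matches. A compact reproduction (the linear table below is an assumption; the log never shows the table dmsetup was given):

    #!/usr/bin/env bash
    # Concatenate the two 1 GiB partitions (2097152 sectors each) into one
    # dm device, then confirm both partitions report it as a holder.
    printf '%s\n' \
        '0 2097152 linear /dev/nvme0n1p1 0' \
        '2097152 2097152 linear /dev/nvme0n1p2 0' \
        | dmsetup create nvme_dm_test
    dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
    for part in nvme0n1p1 nvme0n1p2; do
        [[ -e /sys/class/block/$part/holders/$dm ]] && echo "$part held by $dm"
    done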
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.184 10:42:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:34.500 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:34.766 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:34.766
00:04:34.766 real 0m10.635s
00:04:34.766 user 0m2.797s
00:04:34.766 sys 0m4.892s
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:34.766 10:42:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:04:34.766 ************************************
00:04:34.766 END TEST dm_mount
00:04:34.766 ************************************
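The cleanup_dm trace above reduces to a small teardown pattern: unmount the test mount point if anything is mounted there, remove the device-mapper target if its mapper node still exists, then wipe filesystem signatures from the backing partitions. A minimal standalone sketch of that pattern (the mount point below is a hypothetical placeholder; the dm name and partition paths are the ones from this run, not fixed interfaces):

    #!/usr/bin/env bash
    # Sketch of the cleanup_dm flow traced above. mount_point is a
    # hypothetical placeholder; nvme_dm_test and the partitions match
    # this particular run.
    mount_point=/tmp/dm_mount
    dm_name=nvme_dm_test

    # Unmount only if something is actually mounted there.
    if mountpoint -q "$mount_point"; then
        umount "$mount_point"
    fi

    # Remove the dm target while it still holds the partitions open.
    if [[ -L /dev/mapper/$dm_name ]]; then
        dmsetup remove --force "$dm_name"
    fi

    # With the holder gone, clear filesystem signatures from the partitions.
    for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
        if [[ -b $part ]]; then
            wipefs --all "$part"
        fi
    done

The ordering is the point: while the dm target exists it is a holder of nvme0n1p1/p2 (the holder@nvme0n1p1:dm-0 entries earlier in the trace), so dmsetup remove has to come before wipefs.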
00:04:35.028 10:42:51 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:04:35.028 10:42:51 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:04:35.028 10:42:51 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:35.028 10:42:51 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:35.028 10:42:51 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:35.028 10:42:51 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:35.028 10:42:51 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:35.028 10:42:51 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:35.289 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:35.289 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:04:35.289 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:35.289 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:35.289 10:42:52 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:35.289 10:42:52 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:35.289 10:42:52 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:35.289 10:42:52 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:35.289 10:42:52 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:35.289 10:42:52 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:35.289 10:42:52 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:35.289
00:04:35.289 real 0m28.918s
00:04:35.289 user 0m8.767s
00:04:35.289 sys 0m14.975s
00:04:35.289 10:42:52 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:35.289 10:42:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:35.289 ************************************
00:04:35.289 END TEST devices
00:04:35.289 ************************************
00:04:35.289 10:42:52 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:35.289
00:04:35.289 real 1m38.578s
00:04:35.289 user 0m33.630s
00:04:35.289 sys 0m56.206s
00:04:35.289 10:42:52 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:35.289 10:42:52 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:35.289 ************************************
00:04:35.289 END TEST setup.sh
00:04:35.289 ************************************
00:04:35.289 10:42:52 -- common/autotest_common.sh@1142 -- # return 0
00:04:35.289 10:42:52 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:38.591 Hugepages
00:04:38.591 node hugesize free / total
00:04:38.591 node0 1048576kB 0 / 0
00:04:38.591 node0 2048kB 2048 / 2048
00:04:38.591 node1 1048576kB 0 / 0
00:04:38.591 node1 2048kB 0 / 0
00:04:38.591
00:04:38.591 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:38.591 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:04:38.591 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:04:38.591 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:04:38.591 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:04:38.591 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:04:38.591 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:04:38.591 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:04:38.591 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:04:38.853 NVMe
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:38.853 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:38.853 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:38.853 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:38.853 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:38.853 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:38.853 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:38.853 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:38.853 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:38.853 10:42:55 -- spdk/autotest.sh@130 -- # uname -s 00:04:38.853 10:42:55 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:38.853 10:42:55 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:38.853 10:42:55 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.219 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:42.219 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:42.219 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:42.219 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:42.219 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:42.219 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:42.219 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:42.219 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:42.219 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:42.481 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:42.481 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:42.481 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:42.481 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:42.481 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:42.481 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:42.481 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:44.395 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:44.395 10:43:01 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:45.779 10:43:02 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:45.779 10:43:02 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:45.780 10:43:02 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:45.780 10:43:02 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:45.780 10:43:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:45.780 10:43:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:45.780 10:43:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.780 10:43:02 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.780 10:43:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:45.780 10:43:02 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:45.780 10:43:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:45.780 10:43:02 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.080 Waiting for block devices as requested 00:04:49.080 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:49.080 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:49.080 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:49.080 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:49.341 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:49.341 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:49.341 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:49.602 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:49.602 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:04:49.863 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:49.863 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:49.863 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:50.123 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:50.123 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:50.123 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:50.384 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:50.384 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:50.644 10:43:07 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:50.644 10:43:07 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:50.644 10:43:07 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:50.644 10:43:07 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:04:50.644 10:43:07 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:50.644 10:43:07 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:50.644 10:43:07 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:50.644 10:43:07 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:50.644 10:43:07 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:50.644 10:43:07 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:50.644 10:43:07 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:50.644 10:43:07 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:50.644 10:43:07 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:50.644 10:43:07 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:04:50.644 10:43:07 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:50.644 10:43:07 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:50.644 10:43:07 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:50.644 10:43:07 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:50.644 10:43:07 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:50.644 10:43:07 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:50.644 10:43:07 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:50.644 10:43:07 -- common/autotest_common.sh@1557 -- # continue 00:04:50.644 10:43:07 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:50.644 10:43:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:50.644 10:43:07 -- common/autotest_common.sh@10 -- # set +x 00:04:50.644 10:43:07 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:50.645 10:43:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.645 10:43:07 -- common/autotest_common.sh@10 -- # set +x 00:04:50.645 10:43:07 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.852 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
00:04:54.852 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:54.852 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:54.852 10:43:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:54.852 10:43:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.852 10:43:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.852 10:43:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:54.852 10:43:11 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:54.852 10:43:11 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:54.852 10:43:11 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:54.852 10:43:11 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:54.852 10:43:11 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:54.852 10:43:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:54.852 10:43:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:54.852 10:43:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.852 10:43:11 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:54.852 10:43:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:54.852 10:43:11 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:54.852 10:43:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:54.852 10:43:11 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:54.852 10:43:11 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:54.852 10:43:11 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:04:54.852 10:43:11 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:54.852 10:43:11 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:54.852 10:43:11 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:54.852 10:43:11 -- common/autotest_common.sh@1593 -- # return 0 00:04:54.852 10:43:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:54.852 10:43:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:54.852 10:43:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:54.852 10:43:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:54.852 10:43:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:54.852 10:43:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.852 10:43:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.852 10:43:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:54.852 10:43:11 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:54.852 10:43:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.852 10:43:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.852 10:43:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.852 ************************************ 00:04:54.852 START TEST env 00:04:54.852 ************************************ 00:04:54.852 10:43:11 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:55.114 * Looking for test storage... 
00:04:55.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:55.114 10:43:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:55.114 10:43:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.114 10:43:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.114 10:43:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.114 ************************************ 00:04:55.114 START TEST env_memory 00:04:55.114 ************************************ 00:04:55.114 10:43:11 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:55.114 00:04:55.114 00:04:55.114 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.114 http://cunit.sourceforge.net/ 00:04:55.114 00:04:55.114 00:04:55.114 Suite: memory 00:04:55.114 Test: alloc and free memory map ...[2024-07-12 10:43:11.951997] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:55.114 passed 00:04:55.114 Test: mem map translation ...[2024-07-12 10:43:11.977746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:55.114 [2024-07-12 10:43:11.977785] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:55.114 [2024-07-12 10:43:11.977833] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:55.114 [2024-07-12 10:43:11.977840] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:55.114 passed 00:04:55.114 Test: mem map registration ...[2024-07-12 10:43:12.033219] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:55.114 [2024-07-12 10:43:12.033245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:55.114 passed 00:04:55.376 Test: mem map adjacent registrations ...passed 00:04:55.376 00:04:55.376 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.376 suites 1 1 n/a 0 0 00:04:55.376 tests 4 4 4 0 0 00:04:55.376 asserts 152 152 152 0 n/a 00:04:55.376 00:04:55.376 Elapsed time = 0.193 seconds 00:04:55.376 00:04:55.376 real 0m0.209s 00:04:55.376 user 0m0.195s 00:04:55.376 sys 0m0.012s 00:04:55.376 10:43:12 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.376 10:43:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:55.376 ************************************ 00:04:55.376 END TEST env_memory 00:04:55.376 ************************************ 00:04:55.376 10:43:12 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.376 10:43:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:55.376 10:43:12 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
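Every START TEST / END TEST banner in this log, including the env_memory one just above and the env_vtophys one that follows, comes from the run_test helper. A reduced sketch of that wrapper pattern; the real helper in test/common/autotest_common.sh also records per-test timing and xtrace state, which is omitted here:

    # Reduced sketch of the run_test banner pattern seen throughout this log.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    run_test env_vtophys ./test/env/vtophys/vtophys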
00:04:55.376 10:43:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.376 10:43:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.376 ************************************ 00:04:55.376 START TEST env_vtophys 00:04:55.376 ************************************ 00:04:55.376 10:43:12 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:55.376 EAL: lib.eal log level changed from notice to debug 00:04:55.376 EAL: Detected lcore 0 as core 0 on socket 0 00:04:55.376 EAL: Detected lcore 1 as core 1 on socket 0 00:04:55.376 EAL: Detected lcore 2 as core 2 on socket 0 00:04:55.376 EAL: Detected lcore 3 as core 3 on socket 0 00:04:55.376 EAL: Detected lcore 4 as core 4 on socket 0 00:04:55.376 EAL: Detected lcore 5 as core 5 on socket 0 00:04:55.376 EAL: Detected lcore 6 as core 6 on socket 0 00:04:55.376 EAL: Detected lcore 7 as core 7 on socket 0 00:04:55.376 EAL: Detected lcore 8 as core 8 on socket 0 00:04:55.376 EAL: Detected lcore 9 as core 9 on socket 0 00:04:55.376 EAL: Detected lcore 10 as core 10 on socket 0 00:04:55.376 EAL: Detected lcore 11 as core 11 on socket 0 00:04:55.376 EAL: Detected lcore 12 as core 12 on socket 0 00:04:55.376 EAL: Detected lcore 13 as core 13 on socket 0 00:04:55.376 EAL: Detected lcore 14 as core 14 on socket 0 00:04:55.376 EAL: Detected lcore 15 as core 15 on socket 0 00:04:55.376 EAL: Detected lcore 16 as core 16 on socket 0 00:04:55.376 EAL: Detected lcore 17 as core 17 on socket 0 00:04:55.376 EAL: Detected lcore 18 as core 18 on socket 0 00:04:55.376 EAL: Detected lcore 19 as core 19 on socket 0 00:04:55.376 EAL: Detected lcore 20 as core 20 on socket 0 00:04:55.376 EAL: Detected lcore 21 as core 21 on socket 0 00:04:55.376 EAL: Detected lcore 22 as core 22 on socket 0 00:04:55.376 EAL: Detected lcore 23 as core 23 on socket 0 00:04:55.376 EAL: Detected lcore 24 as core 24 on socket 0 00:04:55.376 EAL: Detected lcore 25 as core 25 on socket 0 00:04:55.376 EAL: Detected lcore 26 as core 26 on socket 0 00:04:55.376 EAL: Detected lcore 27 as core 27 on socket 0 00:04:55.376 EAL: Detected lcore 28 as core 28 on socket 0 00:04:55.376 EAL: Detected lcore 29 as core 29 on socket 0 00:04:55.376 EAL: Detected lcore 30 as core 30 on socket 0 00:04:55.376 EAL: Detected lcore 31 as core 31 on socket 0 00:04:55.376 EAL: Detected lcore 32 as core 32 on socket 0 00:04:55.376 EAL: Detected lcore 33 as core 33 on socket 0 00:04:55.376 EAL: Detected lcore 34 as core 34 on socket 0 00:04:55.376 EAL: Detected lcore 35 as core 35 on socket 0 00:04:55.376 EAL: Detected lcore 36 as core 0 on socket 1 00:04:55.376 EAL: Detected lcore 37 as core 1 on socket 1 00:04:55.376 EAL: Detected lcore 38 as core 2 on socket 1 00:04:55.376 EAL: Detected lcore 39 as core 3 on socket 1 00:04:55.376 EAL: Detected lcore 40 as core 4 on socket 1 00:04:55.376 EAL: Detected lcore 41 as core 5 on socket 1 00:04:55.376 EAL: Detected lcore 42 as core 6 on socket 1 00:04:55.376 EAL: Detected lcore 43 as core 7 on socket 1 00:04:55.376 EAL: Detected lcore 44 as core 8 on socket 1 00:04:55.376 EAL: Detected lcore 45 as core 9 on socket 1 00:04:55.376 EAL: Detected lcore 46 as core 10 on socket 1 00:04:55.376 EAL: Detected lcore 47 as core 11 on socket 1 00:04:55.376 EAL: Detected lcore 48 as core 12 on socket 1 00:04:55.376 EAL: Detected lcore 49 as core 13 on socket 1 00:04:55.376 EAL: Detected lcore 50 as core 14 on socket 1 00:04:55.376 EAL: Detected lcore 51 as core 15 on socket 1 00:04:55.376 
EAL: Detected lcore 52 as core 16 on socket 1 00:04:55.376 EAL: Detected lcore 53 as core 17 on socket 1 00:04:55.376 EAL: Detected lcore 54 as core 18 on socket 1 00:04:55.376 EAL: Detected lcore 55 as core 19 on socket 1 00:04:55.376 EAL: Detected lcore 56 as core 20 on socket 1 00:04:55.376 EAL: Detected lcore 57 as core 21 on socket 1 00:04:55.376 EAL: Detected lcore 58 as core 22 on socket 1 00:04:55.376 EAL: Detected lcore 59 as core 23 on socket 1 00:04:55.376 EAL: Detected lcore 60 as core 24 on socket 1 00:04:55.376 EAL: Detected lcore 61 as core 25 on socket 1 00:04:55.377 EAL: Detected lcore 62 as core 26 on socket 1 00:04:55.377 EAL: Detected lcore 63 as core 27 on socket 1 00:04:55.377 EAL: Detected lcore 64 as core 28 on socket 1 00:04:55.377 EAL: Detected lcore 65 as core 29 on socket 1 00:04:55.377 EAL: Detected lcore 66 as core 30 on socket 1 00:04:55.377 EAL: Detected lcore 67 as core 31 on socket 1 00:04:55.377 EAL: Detected lcore 68 as core 32 on socket 1 00:04:55.377 EAL: Detected lcore 69 as core 33 on socket 1 00:04:55.377 EAL: Detected lcore 70 as core 34 on socket 1 00:04:55.377 EAL: Detected lcore 71 as core 35 on socket 1 00:04:55.377 EAL: Detected lcore 72 as core 0 on socket 0 00:04:55.377 EAL: Detected lcore 73 as core 1 on socket 0 00:04:55.377 EAL: Detected lcore 74 as core 2 on socket 0 00:04:55.377 EAL: Detected lcore 75 as core 3 on socket 0 00:04:55.377 EAL: Detected lcore 76 as core 4 on socket 0 00:04:55.377 EAL: Detected lcore 77 as core 5 on socket 0 00:04:55.377 EAL: Detected lcore 78 as core 6 on socket 0 00:04:55.377 EAL: Detected lcore 79 as core 7 on socket 0 00:04:55.377 EAL: Detected lcore 80 as core 8 on socket 0 00:04:55.377 EAL: Detected lcore 81 as core 9 on socket 0 00:04:55.377 EAL: Detected lcore 82 as core 10 on socket 0 00:04:55.377 EAL: Detected lcore 83 as core 11 on socket 0 00:04:55.377 EAL: Detected lcore 84 as core 12 on socket 0 00:04:55.377 EAL: Detected lcore 85 as core 13 on socket 0 00:04:55.377 EAL: Detected lcore 86 as core 14 on socket 0 00:04:55.377 EAL: Detected lcore 87 as core 15 on socket 0 00:04:55.377 EAL: Detected lcore 88 as core 16 on socket 0 00:04:55.377 EAL: Detected lcore 89 as core 17 on socket 0 00:04:55.377 EAL: Detected lcore 90 as core 18 on socket 0 00:04:55.377 EAL: Detected lcore 91 as core 19 on socket 0 00:04:55.377 EAL: Detected lcore 92 as core 20 on socket 0 00:04:55.377 EAL: Detected lcore 93 as core 21 on socket 0 00:04:55.377 EAL: Detected lcore 94 as core 22 on socket 0 00:04:55.377 EAL: Detected lcore 95 as core 23 on socket 0 00:04:55.377 EAL: Detected lcore 96 as core 24 on socket 0 00:04:55.377 EAL: Detected lcore 97 as core 25 on socket 0 00:04:55.377 EAL: Detected lcore 98 as core 26 on socket 0 00:04:55.377 EAL: Detected lcore 99 as core 27 on socket 0 00:04:55.377 EAL: Detected lcore 100 as core 28 on socket 0 00:04:55.377 EAL: Detected lcore 101 as core 29 on socket 0 00:04:55.377 EAL: Detected lcore 102 as core 30 on socket 0 00:04:55.377 EAL: Detected lcore 103 as core 31 on socket 0 00:04:55.377 EAL: Detected lcore 104 as core 32 on socket 0 00:04:55.377 EAL: Detected lcore 105 as core 33 on socket 0 00:04:55.377 EAL: Detected lcore 106 as core 34 on socket 0 00:04:55.377 EAL: Detected lcore 107 as core 35 on socket 0 00:04:55.377 EAL: Detected lcore 108 as core 0 on socket 1 00:04:55.377 EAL: Detected lcore 109 as core 1 on socket 1 00:04:55.377 EAL: Detected lcore 110 as core 2 on socket 1 00:04:55.377 EAL: Detected lcore 111 as core 3 on socket 1 00:04:55.377 EAL: Detected 
lcore 112 as core 4 on socket 1 00:04:55.377 EAL: Detected lcore 113 as core 5 on socket 1 00:04:55.377 EAL: Detected lcore 114 as core 6 on socket 1 00:04:55.377 EAL: Detected lcore 115 as core 7 on socket 1 00:04:55.377 EAL: Detected lcore 116 as core 8 on socket 1 00:04:55.377 EAL: Detected lcore 117 as core 9 on socket 1 00:04:55.377 EAL: Detected lcore 118 as core 10 on socket 1 00:04:55.377 EAL: Detected lcore 119 as core 11 on socket 1 00:04:55.377 EAL: Detected lcore 120 as core 12 on socket 1 00:04:55.377 EAL: Detected lcore 121 as core 13 on socket 1 00:04:55.377 EAL: Detected lcore 122 as core 14 on socket 1 00:04:55.377 EAL: Detected lcore 123 as core 15 on socket 1 00:04:55.377 EAL: Detected lcore 124 as core 16 on socket 1 00:04:55.377 EAL: Detected lcore 125 as core 17 on socket 1 00:04:55.377 EAL: Detected lcore 126 as core 18 on socket 1 00:04:55.377 EAL: Detected lcore 127 as core 19 on socket 1 00:04:55.377 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:55.377 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:55.377 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:55.377 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:55.377 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:55.377 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:55.377 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:55.377 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:55.377 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:55.377 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:55.377 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:55.377 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:55.377 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:55.377 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:55.377 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:55.377 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:55.377 EAL: Maximum logical cores by configuration: 128 00:04:55.377 EAL: Detected CPU lcores: 128 00:04:55.377 EAL: Detected NUMA nodes: 2 00:04:55.377 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:55.377 EAL: Detected shared linkage of DPDK 00:04:55.377 EAL: No shared files mode enabled, IPC will be disabled 00:04:55.377 EAL: Bus pci wants IOVA as 'DC' 00:04:55.377 EAL: Buses did not request a specific IOVA mode. 00:04:55.377 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:55.377 EAL: Selected IOVA mode 'VA' 00:04:55.377 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.377 EAL: Probing VFIO support... 00:04:55.377 EAL: IOMMU type 1 (Type 1) is supported 00:04:55.377 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:55.377 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:55.377 EAL: VFIO support initialized 00:04:55.377 EAL: Ask a virtual area of 0x2e000 bytes 00:04:55.377 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:55.377 EAL: Setting up physically contiguous memory... 
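The lcore map EAL prints above ("Detected lcore N as core M on socket S") is derived from the same sysfs topology files any script can read. A rough bash equivalent for cross-checking a host against the EAL output (observation only, not part of the test):

    # Print each logical CPU the way EAL reports it above.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "lcore $lcore as core $core on socket $socket"
    done

Lcores 128-143 show up as "Skipped" above because this EAL build caps out at 128 logical cores ("Maximum logical cores by configuration: 128") while the host exposes 144.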
00:04:55.377 EAL: Setting maximum number of open files to 524288 00:04:55.377 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:55.377 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:55.377 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:55.377 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.377 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:55.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.377 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.377 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:55.377 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:55.377 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.377 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:55.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.377 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.377 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:55.377 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:55.377 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.377 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:55.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.377 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.377 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:55.377 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:55.377 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.377 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:55.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.377 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.377 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:55.377 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:55.377 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:55.377 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.377 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:55.377 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:55.377 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.377 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:55.377 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:55.377 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.377 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:55.377 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:55.377 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.377 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:55.377 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:55.377 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.377 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:55.377 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:55.377 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.377 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:55.377 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:55.377 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.377 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:55.377 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:55.377 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.377 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:55.377 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:55.377 EAL: Hugepages will be freed exactly as allocated. 00:04:55.377 EAL: No shared files mode enabled, IPC is disabled 00:04:55.377 EAL: No shared files mode enabled, IPC is disabled 00:04:55.377 EAL: TSC frequency is ~2400000 KHz 00:04:55.377 EAL: Main lcore 0 is ready (tid=7fc055dc8a00;cpuset=[0]) 00:04:55.377 EAL: Trying to obtain current memory policy. 00:04:55.377 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.377 EAL: Restoring previous memory policy: 0 00:04:55.377 EAL: request: mp_malloc_sync 00:04:55.377 EAL: No shared files mode enabled, IPC is disabled 00:04:55.377 EAL: Heap on socket 0 was expanded by 2MB 00:04:55.377 EAL: No shared files mode enabled, IPC is disabled 00:04:55.377 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:55.377 EAL: Mem event callback 'spdk:(nil)' registered 00:04:55.377 00:04:55.377 00:04:55.377 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.377 http://cunit.sourceforge.net/ 00:04:55.377 00:04:55.377 00:04:55.377 Suite: components_suite 00:04:55.377 Test: vtophys_malloc_test ...passed 00:04:55.377 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:55.377 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.377 EAL: Restoring previous memory policy: 4 00:04:55.377 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.377 EAL: request: mp_malloc_sync 00:04:55.377 EAL: No shared files mode enabled, IPC is disabled 00:04:55.377 EAL: Heap on socket 0 was expanded by 4MB 00:04:55.377 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.377 EAL: request: mp_malloc_sync 00:04:55.377 EAL: No shared files mode enabled, IPC is disabled 00:04:55.377 EAL: Heap on socket 0 was shrunk by 4MB 00:04:55.377 EAL: Trying to obtain current memory policy. 00:04:55.377 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.377 EAL: Restoring previous memory policy: 4 00:04:55.377 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.377 EAL: request: mp_malloc_sync 00:04:55.377 EAL: No shared files mode enabled, IPC is disabled 00:04:55.377 EAL: Heap on socket 0 was expanded by 6MB 00:04:55.377 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.377 EAL: request: mp_malloc_sync 00:04:55.377 EAL: No shared files mode enabled, IPC is disabled 00:04:55.378 EAL: Heap on socket 0 was shrunk by 6MB 00:04:55.378 EAL: Trying to obtain current memory policy. 00:04:55.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.378 EAL: Restoring previous memory policy: 4 00:04:55.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.378 EAL: request: mp_malloc_sync 00:04:55.378 EAL: No shared files mode enabled, IPC is disabled 00:04:55.378 EAL: Heap on socket 0 was expanded by 10MB 00:04:55.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.378 EAL: request: mp_malloc_sync 00:04:55.378 EAL: No shared files mode enabled, IPC is disabled 00:04:55.378 EAL: Heap on socket 0 was shrunk by 10MB 00:04:55.378 EAL: Trying to obtain current memory policy. 
00:04:55.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.378 EAL: Restoring previous memory policy: 4 00:04:55.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.378 EAL: request: mp_malloc_sync 00:04:55.378 EAL: No shared files mode enabled, IPC is disabled 00:04:55.378 EAL: Heap on socket 0 was expanded by 18MB 00:04:55.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.378 EAL: request: mp_malloc_sync 00:04:55.378 EAL: No shared files mode enabled, IPC is disabled 00:04:55.378 EAL: Heap on socket 0 was shrunk by 18MB 00:04:55.378 EAL: Trying to obtain current memory policy. 00:04:55.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.378 EAL: Restoring previous memory policy: 4 00:04:55.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.378 EAL: request: mp_malloc_sync 00:04:55.378 EAL: No shared files mode enabled, IPC is disabled 00:04:55.378 EAL: Heap on socket 0 was expanded by 34MB 00:04:55.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.378 EAL: request: mp_malloc_sync 00:04:55.378 EAL: No shared files mode enabled, IPC is disabled 00:04:55.378 EAL: Heap on socket 0 was shrunk by 34MB 00:04:55.378 EAL: Trying to obtain current memory policy. 00:04:55.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.378 EAL: Restoring previous memory policy: 4 00:04:55.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.378 EAL: request: mp_malloc_sync 00:04:55.378 EAL: No shared files mode enabled, IPC is disabled 00:04:55.378 EAL: Heap on socket 0 was expanded by 66MB 00:04:55.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.378 EAL: request: mp_malloc_sync 00:04:55.378 EAL: No shared files mode enabled, IPC is disabled 00:04:55.378 EAL: Heap on socket 0 was shrunk by 66MB 00:04:55.378 EAL: Trying to obtain current memory policy. 00:04:55.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.639 EAL: Restoring previous memory policy: 4 00:04:55.639 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.639 EAL: request: mp_malloc_sync 00:04:55.639 EAL: No shared files mode enabled, IPC is disabled 00:04:55.639 EAL: Heap on socket 0 was expanded by 130MB 00:04:55.639 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.639 EAL: request: mp_malloc_sync 00:04:55.639 EAL: No shared files mode enabled, IPC is disabled 00:04:55.639 EAL: Heap on socket 0 was shrunk by 130MB 00:04:55.639 EAL: Trying to obtain current memory policy. 00:04:55.639 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.639 EAL: Restoring previous memory policy: 4 00:04:55.639 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.639 EAL: request: mp_malloc_sync 00:04:55.639 EAL: No shared files mode enabled, IPC is disabled 00:04:55.639 EAL: Heap on socket 0 was expanded by 258MB 00:04:55.639 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.639 EAL: request: mp_malloc_sync 00:04:55.639 EAL: No shared files mode enabled, IPC is disabled 00:04:55.639 EAL: Heap on socket 0 was shrunk by 258MB 00:04:55.639 EAL: Trying to obtain current memory policy. 
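Each allocation round above is bracketed by a MPOL_PREFERRED set and a policy restore: EAL steers new hugepage faults to socket 0 while it grows the heap, then puts the previous NUMA policy back. The same preference can be expressed from the shell with numactl (an illustrative parallel only; the test sets the policy through the set_mempolicy interface itself, not via numactl):

    # Run a command with page allocations preferentially on node 0,
    # mirroring the "Setting policy MPOL_PREFERRED for socket 0" step.
    numactl --preferred=0 ./test/env/vtophys/vtophys

    # Show the current policy, cf. "Restoring previous memory policy".
    numactl --show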
00:04:55.639 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.639 EAL: Restoring previous memory policy: 4 00:04:55.639 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.639 EAL: request: mp_malloc_sync 00:04:55.639 EAL: No shared files mode enabled, IPC is disabled 00:04:55.639 EAL: Heap on socket 0 was expanded by 514MB 00:04:55.639 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.900 EAL: request: mp_malloc_sync 00:04:55.900 EAL: No shared files mode enabled, IPC is disabled 00:04:55.900 EAL: Heap on socket 0 was shrunk by 514MB 00:04:55.900 EAL: Trying to obtain current memory policy. 00:04:55.900 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.900 EAL: Restoring previous memory policy: 4 00:04:55.900 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.900 EAL: request: mp_malloc_sync 00:04:55.900 EAL: No shared files mode enabled, IPC is disabled 00:04:55.900 EAL: Heap on socket 0 was expanded by 1026MB 00:04:56.165 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.165 EAL: request: mp_malloc_sync 00:04:56.165 EAL: No shared files mode enabled, IPC is disabled 00:04:56.165 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:56.165 passed 00:04:56.165 00:04:56.165 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.165 suites 1 1 n/a 0 0 00:04:56.165 tests 2 2 2 0 0 00:04:56.165 asserts 497 497 497 0 n/a 00:04:56.165 00:04:56.165 Elapsed time = 0.685 seconds 00:04:56.165 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.165 EAL: request: mp_malloc_sync 00:04:56.165 EAL: No shared files mode enabled, IPC is disabled 00:04:56.165 EAL: Heap on socket 0 was shrunk by 2MB 00:04:56.165 EAL: No shared files mode enabled, IPC is disabled 00:04:56.165 EAL: No shared files mode enabled, IPC is disabled 00:04:56.165 EAL: No shared files mode enabled, IPC is disabled 00:04:56.165 00:04:56.165 real 0m0.818s 00:04:56.165 user 0m0.428s 00:04:56.165 sys 0m0.368s 00:04:56.165 10:43:13 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.165 10:43:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:56.165 ************************************ 00:04:56.165 END TEST env_vtophys 00:04:56.165 ************************************ 00:04:56.165 10:43:13 env -- common/autotest_common.sh@1142 -- # return 0 00:04:56.165 10:43:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:56.165 10:43:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.166 10:43:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.166 10:43:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.166 ************************************ 00:04:56.166 START TEST env_pci 00:04:56.166 ************************************ 00:04:56.166 10:43:13 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:56.166 00:04:56.166 00:04:56.166 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.166 http://cunit.sourceforge.net/ 00:04:56.166 00:04:56.166 00:04:56.166 Suite: pci 00:04:56.166 Test: pci_hook ...[2024-07-12 10:43:13.100926] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1874792 has claimed it 00:04:56.166 EAL: Cannot find device (10000:00:01.0) 00:04:56.166 EAL: Failed to attach device on primary process 00:04:56.166 passed 00:04:56.166 
00:04:56.166 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.166 suites 1 1 n/a 0 0 00:04:56.166 tests 1 1 1 0 0 00:04:56.166 asserts 25 25 25 0 n/a 00:04:56.166 00:04:56.166 Elapsed time = 0.029 seconds 00:04:56.166 00:04:56.166 real 0m0.049s 00:04:56.167 user 0m0.015s 00:04:56.167 sys 0m0.033s 00:04:56.167 10:43:13 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.167 10:43:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:56.167 ************************************ 00:04:56.167 END TEST env_pci 00:04:56.167 ************************************ 00:04:56.428 10:43:13 env -- common/autotest_common.sh@1142 -- # return 0 00:04:56.428 10:43:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:56.428 10:43:13 env -- env/env.sh@15 -- # uname 00:04:56.428 10:43:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:56.428 10:43:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:56.429 10:43:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.429 10:43:13 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:56.429 10:43:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.429 10:43:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.429 ************************************ 00:04:56.429 START TEST env_dpdk_post_init 00:04:56.429 ************************************ 00:04:56.429 10:43:13 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.429 EAL: Detected CPU lcores: 128 00:04:56.429 EAL: Detected NUMA nodes: 2 00:04:56.429 EAL: Detected shared linkage of DPDK 00:04:56.429 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.429 EAL: Selected IOVA mode 'VA' 00:04:56.429 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.429 EAL: VFIO support initialized 00:04:56.429 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.429 EAL: Using IOMMU type 1 (Type 1) 00:04:56.689 EAL: Ignore mapping IO port bar(1) 00:04:56.689 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:56.951 EAL: Ignore mapping IO port bar(1) 00:04:56.951 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:56.951 EAL: Ignore mapping IO port bar(1) 00:04:57.212 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:57.212 EAL: Ignore mapping IO port bar(1) 00:04:57.474 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:57.474 EAL: Ignore mapping IO port bar(1) 00:04:57.735 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:57.735 EAL: Ignore mapping IO port bar(1) 00:04:57.735 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:57.996 EAL: Ignore mapping IO port bar(1) 00:04:57.996 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:58.257 EAL: Ignore mapping IO port bar(1) 00:04:58.257 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:58.518 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:58.518 EAL: Ignore mapping IO port bar(1) 00:04:58.779 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:04:58.779 EAL: Ignore mapping IO port bar(1) 00:04:59.041 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:59.041 EAL: Ignore mapping IO port bar(1) 00:04:59.303 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:59.303 EAL: Ignore mapping IO port bar(1) 00:04:59.303 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:59.564 EAL: Ignore mapping IO port bar(1) 00:04:59.564 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:59.825 EAL: Ignore mapping IO port bar(1) 00:04:59.825 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:00.086 EAL: Ignore mapping IO port bar(1) 00:05:00.087 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:00.087 EAL: Ignore mapping IO port bar(1) 00:05:00.348 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:00.348 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:00.348 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:00.348 Starting DPDK initialization... 00:05:00.348 Starting SPDK post initialization... 00:05:00.348 SPDK NVMe probe 00:05:00.348 Attaching to 0000:65:00.0 00:05:00.348 Attached to 0000:65:00.0 00:05:00.348 Cleaning up... 00:05:02.265 00:05:02.265 real 0m5.738s 00:05:02.265 user 0m0.193s 00:05:02.265 sys 0m0.097s 00:05:02.265 10:43:18 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.265 10:43:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.265 ************************************ 00:05:02.265 END TEST env_dpdk_post_init 00:05:02.266 ************************************ 00:05:02.266 10:43:18 env -- common/autotest_common.sh@1142 -- # return 0 00:05:02.266 10:43:18 env -- env/env.sh@26 -- # uname 00:05:02.266 10:43:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:02.266 10:43:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:02.266 10:43:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.266 10:43:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.266 10:43:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.266 ************************************ 00:05:02.266 START TEST env_mem_callbacks 00:05:02.266 ************************************ 00:05:02.266 10:43:19 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:02.266 EAL: Detected CPU lcores: 128 00:05:02.266 EAL: Detected NUMA nodes: 2 00:05:02.266 EAL: Detected shared linkage of DPDK 00:05:02.266 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:02.266 EAL: Selected IOVA mode 'VA' 00:05:02.266 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.266 EAL: VFIO support initialized 00:05:02.266 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:02.266 00:05:02.266 00:05:02.266 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.266 http://cunit.sourceforge.net/ 00:05:02.266 00:05:02.266 00:05:02.266 Suite: memory 00:05:02.266 Test: test ... 
00:05:02.266 register 0x200000200000 2097152 00:05:02.266 malloc 3145728 00:05:02.266 register 0x200000400000 4194304 00:05:02.266 buf 0x200000500000 len 3145728 PASSED 00:05:02.266 malloc 64 00:05:02.266 buf 0x2000004fff40 len 64 PASSED 00:05:02.266 malloc 4194304 00:05:02.266 register 0x200000800000 6291456 00:05:02.266 buf 0x200000a00000 len 4194304 PASSED 00:05:02.266 free 0x200000500000 3145728 00:05:02.266 free 0x2000004fff40 64 00:05:02.266 unregister 0x200000400000 4194304 PASSED 00:05:02.266 free 0x200000a00000 4194304 00:05:02.266 unregister 0x200000800000 6291456 PASSED 00:05:02.266 malloc 8388608 00:05:02.266 register 0x200000400000 10485760 00:05:02.266 buf 0x200000600000 len 8388608 PASSED 00:05:02.266 free 0x200000600000 8388608 00:05:02.266 unregister 0x200000400000 10485760 PASSED 00:05:02.266 passed 00:05:02.266 00:05:02.266 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.266 suites 1 1 n/a 0 0 00:05:02.266 tests 1 1 1 0 0 00:05:02.266 asserts 15 15 15 0 n/a 00:05:02.266 00:05:02.266 Elapsed time = 0.010 seconds 00:05:02.266 00:05:02.266 real 0m0.069s 00:05:02.266 user 0m0.026s 00:05:02.266 sys 0m0.043s 00:05:02.266 10:43:19 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.266 10:43:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:02.266 ************************************ 00:05:02.266 END TEST env_mem_callbacks 00:05:02.266 ************************************ 00:05:02.266 10:43:19 env -- common/autotest_common.sh@1142 -- # return 0 00:05:02.266 00:05:02.266 real 0m7.397s 00:05:02.266 user 0m1.052s 00:05:02.266 sys 0m0.901s 00:05:02.266 10:43:19 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.266 10:43:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.266 ************************************ 00:05:02.266 END TEST env 00:05:02.266 ************************************ 00:05:02.266 10:43:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:02.266 10:43:19 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:02.266 10:43:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.266 10:43:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.266 10:43:19 -- common/autotest_common.sh@10 -- # set +x 00:05:02.266 ************************************ 00:05:02.266 START TEST rpc 00:05:02.266 ************************************ 00:05:02.266 10:43:19 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:02.528 * Looking for test storage... 00:05:02.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:02.528 10:43:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1876233 00:05:02.528 10:43:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.528 10:43:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:02.528 10:43:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1876233 00:05:02.528 10:43:19 rpc -- common/autotest_common.sh@829 -- # '[' -z 1876233 ']' 00:05:02.528 10:43:19 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.528 10:43:19 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.528 10:43:19 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
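The rpc suite above starts spdk_tgt with -e bdev and then sits in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-poll pattern; the real waitforlisten in test/common/autotest_common.sh does more bookkeeping, and rpc_get_methods is a standard SPDK RPC that works as a readiness probe:

    # Launch the target and poll its default UNIX RPC socket.
    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!

    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        # Bail out if the target died before it started listening.
        if ! kill -0 "$spdk_pid" 2> /dev/null; then
            echo "spdk_tgt exited before listening" >&2
            exit 1
        fi
        sleep 0.5
    done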
00:05:02.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.528 10:43:19 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.528 10:43:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.528 [2024-07-12 10:43:19.395184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:02.528 [2024-07-12 10:43:19.395253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876233 ] 00:05:02.528 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.528 [2024-07-12 10:43:19.477499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.789 [2024-07-12 10:43:19.572836] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:02.789 [2024-07-12 10:43:19.572894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1876233' to capture a snapshot of events at runtime. 00:05:02.789 [2024-07-12 10:43:19.572902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.789 [2024-07-12 10:43:19.572909] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.789 [2024-07-12 10:43:19.572915] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1876233 for offline analysis/debug. 00:05:02.789 [2024-07-12 10:43:19.572940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.362 10:43:20 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.362 10:43:20 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:03.362 10:43:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.362 10:43:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.362 10:43:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:03.362 10:43:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:03.362 10:43:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.362 10:43:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.362 10:43:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.362 ************************************ 00:05:03.362 START TEST rpc_integrity 00:05:03.362 ************************************ 00:05:03.362 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:03.362 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.362 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.362 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.362 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.362 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:03.362 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.362 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.362 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.362 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.362 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.362 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.362 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:03.362 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.362 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.362 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.362 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.362 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.362 { 00:05:03.362 "name": "Malloc0", 00:05:03.362 "aliases": [ 00:05:03.362 "9b1afa8f-1950-44c1-b8fd-1f5c09fbf296" 00:05:03.362 ], 00:05:03.362 "product_name": "Malloc disk", 00:05:03.362 "block_size": 512, 00:05:03.362 "num_blocks": 16384, 00:05:03.362 "uuid": "9b1afa8f-1950-44c1-b8fd-1f5c09fbf296", 00:05:03.362 "assigned_rate_limits": { 00:05:03.362 "rw_ios_per_sec": 0, 00:05:03.362 "rw_mbytes_per_sec": 0, 00:05:03.362 "r_mbytes_per_sec": 0, 00:05:03.362 "w_mbytes_per_sec": 0 00:05:03.362 }, 00:05:03.362 "claimed": false, 00:05:03.362 "zoned": false, 00:05:03.362 "supported_io_types": { 00:05:03.362 "read": true, 00:05:03.362 "write": true, 00:05:03.362 "unmap": true, 00:05:03.362 "flush": true, 00:05:03.362 "reset": true, 00:05:03.362 "nvme_admin": false, 00:05:03.362 "nvme_io": false, 00:05:03.362 "nvme_io_md": false, 00:05:03.362 "write_zeroes": true, 00:05:03.362 "zcopy": true, 00:05:03.362 "get_zone_info": false, 00:05:03.362 "zone_management": false, 00:05:03.362 "zone_append": false, 00:05:03.362 "compare": false, 00:05:03.362 "compare_and_write": false, 00:05:03.362 "abort": true, 00:05:03.362 "seek_hole": false, 00:05:03.362 "seek_data": false, 00:05:03.362 "copy": true, 00:05:03.362 "nvme_iov_md": false 00:05:03.362 }, 00:05:03.362 "memory_domains": [ 00:05:03.362 { 00:05:03.362 "dma_device_id": "system", 00:05:03.362 "dma_device_type": 1 00:05:03.362 }, 00:05:03.362 { 00:05:03.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.362 "dma_device_type": 2 00:05:03.362 } 00:05:03.362 ], 00:05:03.362 "driver_specific": {} 00:05:03.362 } 00:05:03.362 ]' 00:05:03.362 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.624 [2024-07-12 10:43:20.378017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:03.624 [2024-07-12 10:43:20.378067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.624 [2024-07-12 10:43:20.378082] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcbcd80 00:05:03.624 [2024-07-12 10:43:20.378090] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.624 
[2024-07-12 10:43:20.379633] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.624 [2024-07-12 10:43:20.379670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.624 Passthru0 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.624 { 00:05:03.624 "name": "Malloc0", 00:05:03.624 "aliases": [ 00:05:03.624 "9b1afa8f-1950-44c1-b8fd-1f5c09fbf296" 00:05:03.624 ], 00:05:03.624 "product_name": "Malloc disk", 00:05:03.624 "block_size": 512, 00:05:03.624 "num_blocks": 16384, 00:05:03.624 "uuid": "9b1afa8f-1950-44c1-b8fd-1f5c09fbf296", 00:05:03.624 "assigned_rate_limits": { 00:05:03.624 "rw_ios_per_sec": 0, 00:05:03.624 "rw_mbytes_per_sec": 0, 00:05:03.624 "r_mbytes_per_sec": 0, 00:05:03.624 "w_mbytes_per_sec": 0 00:05:03.624 }, 00:05:03.624 "claimed": true, 00:05:03.624 "claim_type": "exclusive_write", 00:05:03.624 "zoned": false, 00:05:03.624 "supported_io_types": { 00:05:03.624 "read": true, 00:05:03.624 "write": true, 00:05:03.624 "unmap": true, 00:05:03.624 "flush": true, 00:05:03.624 "reset": true, 00:05:03.624 "nvme_admin": false, 00:05:03.624 "nvme_io": false, 00:05:03.624 "nvme_io_md": false, 00:05:03.624 "write_zeroes": true, 00:05:03.624 "zcopy": true, 00:05:03.624 "get_zone_info": false, 00:05:03.624 "zone_management": false, 00:05:03.624 "zone_append": false, 00:05:03.624 "compare": false, 00:05:03.624 "compare_and_write": false, 00:05:03.624 "abort": true, 00:05:03.624 "seek_hole": false, 00:05:03.624 "seek_data": false, 00:05:03.624 "copy": true, 00:05:03.624 "nvme_iov_md": false 00:05:03.624 }, 00:05:03.624 "memory_domains": [ 00:05:03.624 { 00:05:03.624 "dma_device_id": "system", 00:05:03.624 "dma_device_type": 1 00:05:03.624 }, 00:05:03.624 { 00:05:03.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.624 "dma_device_type": 2 00:05:03.624 } 00:05:03.624 ], 00:05:03.624 "driver_specific": {} 00:05:03.624 }, 00:05:03.624 { 00:05:03.624 "name": "Passthru0", 00:05:03.624 "aliases": [ 00:05:03.624 "911cc880-53d7-52f8-9dc5-ef9b8339ba42" 00:05:03.624 ], 00:05:03.624 "product_name": "passthru", 00:05:03.624 "block_size": 512, 00:05:03.624 "num_blocks": 16384, 00:05:03.624 "uuid": "911cc880-53d7-52f8-9dc5-ef9b8339ba42", 00:05:03.624 "assigned_rate_limits": { 00:05:03.624 "rw_ios_per_sec": 0, 00:05:03.624 "rw_mbytes_per_sec": 0, 00:05:03.624 "r_mbytes_per_sec": 0, 00:05:03.624 "w_mbytes_per_sec": 0 00:05:03.624 }, 00:05:03.624 "claimed": false, 00:05:03.624 "zoned": false, 00:05:03.624 "supported_io_types": { 00:05:03.624 "read": true, 00:05:03.624 "write": true, 00:05:03.624 "unmap": true, 00:05:03.624 "flush": true, 00:05:03.624 "reset": true, 00:05:03.624 "nvme_admin": false, 00:05:03.624 "nvme_io": false, 00:05:03.624 "nvme_io_md": false, 00:05:03.624 "write_zeroes": true, 00:05:03.624 "zcopy": true, 00:05:03.624 "get_zone_info": false, 00:05:03.624 "zone_management": false, 00:05:03.624 "zone_append": false, 00:05:03.624 "compare": false, 00:05:03.624 "compare_and_write": false, 00:05:03.624 "abort": true, 00:05:03.624 "seek_hole": false, 
00:05:03.624 "seek_data": false, 00:05:03.624 "copy": true, 00:05:03.624 "nvme_iov_md": false 00:05:03.624 }, 00:05:03.624 "memory_domains": [ 00:05:03.624 { 00:05:03.624 "dma_device_id": "system", 00:05:03.624 "dma_device_type": 1 00:05:03.624 }, 00:05:03.624 { 00:05:03.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.624 "dma_device_type": 2 00:05:03.624 } 00:05:03.624 ], 00:05:03.624 "driver_specific": { 00:05:03.624 "passthru": { 00:05:03.624 "name": "Passthru0", 00:05:03.624 "base_bdev_name": "Malloc0" 00:05:03.624 } 00:05:03.624 } 00:05:03.624 } 00:05:03.624 ]' 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.624 10:43:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.624 00:05:03.624 real 0m0.299s 00:05:03.624 user 0m0.186s 00:05:03.624 sys 0m0.043s 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.624 10:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.624 ************************************ 00:05:03.624 END TEST rpc_integrity 00:05:03.624 ************************************ 00:05:03.624 10:43:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:03.624 10:43:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:03.624 10:43:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.625 10:43:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.625 10:43:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.887 ************************************ 00:05:03.887 START TEST rpc_plugins 00:05:03.887 ************************************ 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:03.887 { 00:05:03.887 "name": "Malloc1", 00:05:03.887 "aliases": [ 00:05:03.887 "206c9a7f-0a8b-4eab-8848-d297e8937379" 00:05:03.887 ], 00:05:03.887 "product_name": "Malloc disk", 00:05:03.887 "block_size": 4096, 00:05:03.887 "num_blocks": 256, 00:05:03.887 "uuid": "206c9a7f-0a8b-4eab-8848-d297e8937379", 00:05:03.887 "assigned_rate_limits": { 00:05:03.887 "rw_ios_per_sec": 0, 00:05:03.887 "rw_mbytes_per_sec": 0, 00:05:03.887 "r_mbytes_per_sec": 0, 00:05:03.887 "w_mbytes_per_sec": 0 00:05:03.887 }, 00:05:03.887 "claimed": false, 00:05:03.887 "zoned": false, 00:05:03.887 "supported_io_types": { 00:05:03.887 "read": true, 00:05:03.887 "write": true, 00:05:03.887 "unmap": true, 00:05:03.887 "flush": true, 00:05:03.887 "reset": true, 00:05:03.887 "nvme_admin": false, 00:05:03.887 "nvme_io": false, 00:05:03.887 "nvme_io_md": false, 00:05:03.887 "write_zeroes": true, 00:05:03.887 "zcopy": true, 00:05:03.887 "get_zone_info": false, 00:05:03.887 "zone_management": false, 00:05:03.887 "zone_append": false, 00:05:03.887 "compare": false, 00:05:03.887 "compare_and_write": false, 00:05:03.887 "abort": true, 00:05:03.887 "seek_hole": false, 00:05:03.887 "seek_data": false, 00:05:03.887 "copy": true, 00:05:03.887 "nvme_iov_md": false 00:05:03.887 }, 00:05:03.887 "memory_domains": [ 00:05:03.887 { 00:05:03.887 "dma_device_id": "system", 00:05:03.887 "dma_device_type": 1 00:05:03.887 }, 00:05:03.887 { 00:05:03.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.887 "dma_device_type": 2 00:05:03.887 } 00:05:03.887 ], 00:05:03.887 "driver_specific": {} 00:05:03.887 } 00:05:03.887 ]' 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:03.887 10:43:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.887 00:05:03.887 real 0m0.152s 00:05:03.887 user 0m0.096s 00:05:03.887 sys 0m0.019s 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.887 10:43:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.887 ************************************ 00:05:03.887 END TEST rpc_plugins 00:05:03.887 ************************************ 00:05:03.887 10:43:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:03.887 10:43:20 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.887 10:43:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.887 10:43:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.887 10:43:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.887 ************************************ 00:05:03.887 START TEST rpc_trace_cmd_test 00:05:03.887 ************************************ 00:05:03.887 10:43:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:03.887 10:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:03.887 10:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:03.887 10:43:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.887 10:43:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.887 10:43:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.887 10:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:03.887 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1876233", 00:05:03.887 "tpoint_group_mask": "0x8", 00:05:03.887 "iscsi_conn": { 00:05:03.887 "mask": "0x2", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "scsi": { 00:05:03.887 "mask": "0x4", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "bdev": { 00:05:03.887 "mask": "0x8", 00:05:03.887 "tpoint_mask": "0xffffffffffffffff" 00:05:03.887 }, 00:05:03.887 "nvmf_rdma": { 00:05:03.887 "mask": "0x10", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "nvmf_tcp": { 00:05:03.887 "mask": "0x20", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "ftl": { 00:05:03.887 "mask": "0x40", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "blobfs": { 00:05:03.887 "mask": "0x80", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "dsa": { 00:05:03.887 "mask": "0x200", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "thread": { 00:05:03.887 "mask": "0x400", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "nvme_pcie": { 00:05:03.887 "mask": "0x800", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "iaa": { 00:05:03.887 "mask": "0x1000", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "nvme_tcp": { 00:05:03.887 "mask": "0x2000", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "bdev_nvme": { 00:05:03.887 "mask": "0x4000", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 }, 00:05:03.887 "sock": { 00:05:03.887 "mask": "0x8000", 00:05:03.887 "tpoint_mask": "0x0" 00:05:03.887 } 00:05:03.887 }' 00:05:03.887 10:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:04.147 10:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:04.147 10:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:04.147 10:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:04.147 10:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:04.147 10:43:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:04.147 10:43:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:04.147 10:43:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:04.147 10:43:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:04.147 10:43:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
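A by-hand equivalent of the trace checks above, as a minimal sketch (paths assumed relative to an SPDK checkout; the target must have been started with '-e bdev', as at the top of this run, for the bdev group mask to read 0x8):

./scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # expect "0x8": only the bdev trace group is enabled
./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask    # expect "0xffffffffffffffff": all bdev tracepoints on
spdk_trace -s spdk_tgt -p <target_pid>                       # parse the shm file named in tpoint_shm_path, as the app_setup_trace notice earlier in this log suggests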
00:05:04.147 00:05:04.147 real 0m0.248s 00:05:04.147 user 0m0.210s 00:05:04.147 sys 0m0.027s 00:05:04.147 10:43:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.147 10:43:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:04.147 ************************************ 00:05:04.147 END TEST rpc_trace_cmd_test 00:05:04.147 ************************************ 00:05:04.408 10:43:21 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:04.408 10:43:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:04.408 10:43:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:04.408 10:43:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:04.408 10:43:21 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.408 10:43:21 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.408 10:43:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.408 ************************************ 00:05:04.408 START TEST rpc_daemon_integrity 00:05:04.408 ************************************ 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.408 { 00:05:04.408 "name": "Malloc2", 00:05:04.408 "aliases": [ 00:05:04.408 "4cab8db1-ac8c-47a4-92a7-e656626ae07f" 00:05:04.408 ], 00:05:04.408 "product_name": "Malloc disk", 00:05:04.408 "block_size": 512, 00:05:04.408 "num_blocks": 16384, 00:05:04.408 "uuid": "4cab8db1-ac8c-47a4-92a7-e656626ae07f", 00:05:04.408 "assigned_rate_limits": { 00:05:04.408 "rw_ios_per_sec": 0, 00:05:04.408 "rw_mbytes_per_sec": 0, 00:05:04.408 "r_mbytes_per_sec": 0, 00:05:04.408 "w_mbytes_per_sec": 0 00:05:04.408 }, 00:05:04.408 "claimed": false, 00:05:04.408 "zoned": false, 00:05:04.408 "supported_io_types": { 00:05:04.408 "read": true, 00:05:04.408 "write": true, 00:05:04.408 "unmap": true, 00:05:04.408 "flush": true, 00:05:04.408 "reset": true, 00:05:04.408 "nvme_admin": false, 00:05:04.408 "nvme_io": false, 
00:05:04.408 "nvme_io_md": false, 00:05:04.408 "write_zeroes": true, 00:05:04.408 "zcopy": true, 00:05:04.408 "get_zone_info": false, 00:05:04.408 "zone_management": false, 00:05:04.408 "zone_append": false, 00:05:04.408 "compare": false, 00:05:04.408 "compare_and_write": false, 00:05:04.408 "abort": true, 00:05:04.408 "seek_hole": false, 00:05:04.408 "seek_data": false, 00:05:04.408 "copy": true, 00:05:04.408 "nvme_iov_md": false 00:05:04.408 }, 00:05:04.408 "memory_domains": [ 00:05:04.408 { 00:05:04.408 "dma_device_id": "system", 00:05:04.408 "dma_device_type": 1 00:05:04.408 }, 00:05:04.408 { 00:05:04.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.408 "dma_device_type": 2 00:05:04.408 } 00:05:04.408 ], 00:05:04.408 "driver_specific": {} 00:05:04.408 } 00:05:04.408 ]' 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.408 [2024-07-12 10:43:21.316573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:04.408 [2024-07-12 10:43:21.316619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.408 [2024-07-12 10:43:21.316635] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcbda90 00:05:04.408 [2024-07-12 10:43:21.316643] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:04.408 [2024-07-12 10:43:21.318029] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.408 [2024-07-12 10:43:21.318064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.408 Passthru0 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.408 { 00:05:04.408 "name": "Malloc2", 00:05:04.408 "aliases": [ 00:05:04.408 "4cab8db1-ac8c-47a4-92a7-e656626ae07f" 00:05:04.408 ], 00:05:04.408 "product_name": "Malloc disk", 00:05:04.408 "block_size": 512, 00:05:04.408 "num_blocks": 16384, 00:05:04.408 "uuid": "4cab8db1-ac8c-47a4-92a7-e656626ae07f", 00:05:04.408 "assigned_rate_limits": { 00:05:04.408 "rw_ios_per_sec": 0, 00:05:04.408 "rw_mbytes_per_sec": 0, 00:05:04.408 "r_mbytes_per_sec": 0, 00:05:04.408 "w_mbytes_per_sec": 0 00:05:04.408 }, 00:05:04.408 "claimed": true, 00:05:04.408 "claim_type": "exclusive_write", 00:05:04.408 "zoned": false, 00:05:04.408 "supported_io_types": { 00:05:04.408 "read": true, 00:05:04.408 "write": true, 00:05:04.408 "unmap": true, 00:05:04.408 "flush": true, 00:05:04.408 "reset": true, 00:05:04.408 "nvme_admin": false, 00:05:04.408 "nvme_io": false, 00:05:04.408 "nvme_io_md": false, 00:05:04.408 "write_zeroes": true, 00:05:04.408 "zcopy": true, 00:05:04.408 "get_zone_info": 
false, 00:05:04.408 "zone_management": false, 00:05:04.408 "zone_append": false, 00:05:04.408 "compare": false, 00:05:04.408 "compare_and_write": false, 00:05:04.408 "abort": true, 00:05:04.408 "seek_hole": false, 00:05:04.408 "seek_data": false, 00:05:04.408 "copy": true, 00:05:04.408 "nvme_iov_md": false 00:05:04.408 }, 00:05:04.408 "memory_domains": [ 00:05:04.408 { 00:05:04.408 "dma_device_id": "system", 00:05:04.408 "dma_device_type": 1 00:05:04.408 }, 00:05:04.408 { 00:05:04.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.408 "dma_device_type": 2 00:05:04.408 } 00:05:04.408 ], 00:05:04.408 "driver_specific": {} 00:05:04.408 }, 00:05:04.408 { 00:05:04.408 "name": "Passthru0", 00:05:04.408 "aliases": [ 00:05:04.408 "85a3ff18-fcfe-5813-ba55-34cf38eef45c" 00:05:04.408 ], 00:05:04.408 "product_name": "passthru", 00:05:04.408 "block_size": 512, 00:05:04.408 "num_blocks": 16384, 00:05:04.408 "uuid": "85a3ff18-fcfe-5813-ba55-34cf38eef45c", 00:05:04.408 "assigned_rate_limits": { 00:05:04.408 "rw_ios_per_sec": 0, 00:05:04.408 "rw_mbytes_per_sec": 0, 00:05:04.408 "r_mbytes_per_sec": 0, 00:05:04.408 "w_mbytes_per_sec": 0 00:05:04.408 }, 00:05:04.408 "claimed": false, 00:05:04.408 "zoned": false, 00:05:04.408 "supported_io_types": { 00:05:04.408 "read": true, 00:05:04.408 "write": true, 00:05:04.408 "unmap": true, 00:05:04.408 "flush": true, 00:05:04.408 "reset": true, 00:05:04.408 "nvme_admin": false, 00:05:04.408 "nvme_io": false, 00:05:04.408 "nvme_io_md": false, 00:05:04.408 "write_zeroes": true, 00:05:04.408 "zcopy": true, 00:05:04.408 "get_zone_info": false, 00:05:04.408 "zone_management": false, 00:05:04.408 "zone_append": false, 00:05:04.408 "compare": false, 00:05:04.408 "compare_and_write": false, 00:05:04.408 "abort": true, 00:05:04.408 "seek_hole": false, 00:05:04.408 "seek_data": false, 00:05:04.408 "copy": true, 00:05:04.408 "nvme_iov_md": false 00:05:04.408 }, 00:05:04.408 "memory_domains": [ 00:05:04.408 { 00:05:04.408 "dma_device_id": "system", 00:05:04.408 "dma_device_type": 1 00:05:04.408 }, 00:05:04.408 { 00:05:04.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.408 "dma_device_type": 2 00:05:04.408 } 00:05:04.408 ], 00:05:04.408 "driver_specific": { 00:05:04.408 "passthru": { 00:05:04.408 "name": "Passthru0", 00:05:04.408 "base_bdev_name": "Malloc2" 00:05:04.408 } 00:05:04.408 } 00:05:04.408 } 00:05:04.408 ]' 00:05:04.408 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.669 10:43:21 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.669 00:05:04.669 real 0m0.298s 00:05:04.669 user 0m0.188s 00:05:04.669 sys 0m0.050s 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.669 10:43:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.669 ************************************ 00:05:04.669 END TEST rpc_daemon_integrity 00:05:04.669 ************************************ 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:04.669 10:43:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:04.669 10:43:21 rpc -- rpc/rpc.sh@84 -- # killprocess 1876233 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@948 -- # '[' -z 1876233 ']' 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@952 -- # kill -0 1876233 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@953 -- # uname 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1876233 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1876233' 00:05:04.669 killing process with pid 1876233 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@967 -- # kill 1876233 00:05:04.669 10:43:21 rpc -- common/autotest_common.sh@972 -- # wait 1876233 00:05:04.930 00:05:04.930 real 0m2.579s 00:05:04.930 user 0m3.332s 00:05:04.930 sys 0m0.791s 00:05:04.930 10:43:21 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.930 10:43:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.930 ************************************ 00:05:04.930 END TEST rpc 00:05:04.930 ************************************ 00:05:04.930 10:43:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.930 10:43:21 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:04.930 10:43:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.930 10:43:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.930 10:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:04.930 ************************************ 00:05:04.930 START TEST skip_rpc 00:05:04.930 ************************************ 00:05:04.930 10:43:21 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:05.191 * Looking for test storage... 
00:05:05.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:05.191 10:43:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:05.191 10:43:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:05.191 10:43:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:05.191 10:43:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.191 10:43:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.191 10:43:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.191 ************************************ 00:05:05.191 START TEST skip_rpc 00:05:05.191 ************************************ 00:05:05.191 10:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:05.191 10:43:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1876809 00:05:05.191 10:43:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.191 10:43:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:05.191 10:43:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:05.191 [2024-07-12 10:43:22.091313] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:05.191 [2024-07-12 10:43:22.091372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876809 ] 00:05:05.191 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.191 [2024-07-12 10:43:22.172032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.452 [2024-07-12 10:43:22.270948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1876809 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1876809 ']' 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1876809 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1876809 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1876809' 00:05:10.767 killing process with pid 1876809 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1876809 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1876809 00:05:10.767 00:05:10.767 real 0m5.255s 00:05:10.767 user 0m4.990s 00:05:10.767 sys 0m0.292s 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.767 10:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.767 ************************************ 00:05:10.767 END TEST skip_rpc 00:05:10.767 ************************************ 00:05:10.767 10:43:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.767 10:43:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.767 10:43:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.767 10:43:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.767 10:43:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.767 ************************************ 00:05:10.767 START TEST skip_rpc_with_json 00:05:10.767 ************************************ 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1877999 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1877999 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1877999 ']' 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
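What the skip_rpc case above asserts, as a minimal sketch (paths assumed relative to an SPDK checkout): with --no-rpc-server the target never creates the /var/tmp/spdk.sock listener, so any RPC call has to fail.

./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target runs, but no RPC server is started
./scripts/rpc.py spdk_get_version               # expected to fail: nothing listens on /var/tmp/spdk.sock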
00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.767 10:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.767 [2024-07-12 10:43:27.418511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:10.767 [2024-07-12 10:43:27.418565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1877999 ] 00:05:10.767 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.767 [2024-07-12 10:43:27.497583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.767 [2024-07-12 10:43:27.566227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.338 [2024-07-12 10:43:28.197483] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:11.338 request: 00:05:11.338 { 00:05:11.338 "trtype": "tcp", 00:05:11.338 "method": "nvmf_get_transports", 00:05:11.338 "req_id": 1 00:05:11.338 } 00:05:11.338 Got JSON-RPC error response 00:05:11.338 response: 00:05:11.338 { 00:05:11.338 "code": -19, 00:05:11.338 "message": "No such device" 00:05:11.338 } 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.338 [2024-07-12 10:43:28.209578] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.338 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.599 { 00:05:11.599 "subsystems": [ 00:05:11.599 { 00:05:11.599 "subsystem": "vfio_user_target", 00:05:11.599 "config": null 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "keyring", 00:05:11.599 "config": [] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "iobuf", 00:05:11.599 "config": [ 00:05:11.599 { 00:05:11.599 "method": "iobuf_set_options", 00:05:11.599 "params": { 00:05:11.599 "small_pool_count": 8192, 00:05:11.599 "large_pool_count": 1024, 00:05:11.599 "small_bufsize": 8192, 00:05:11.599 "large_bufsize": 
135168 00:05:11.599 } 00:05:11.599 } 00:05:11.599 ] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "sock", 00:05:11.599 "config": [ 00:05:11.599 { 00:05:11.599 "method": "sock_set_default_impl", 00:05:11.599 "params": { 00:05:11.599 "impl_name": "posix" 00:05:11.599 } 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "method": "sock_impl_set_options", 00:05:11.599 "params": { 00:05:11.599 "impl_name": "ssl", 00:05:11.599 "recv_buf_size": 4096, 00:05:11.599 "send_buf_size": 4096, 00:05:11.599 "enable_recv_pipe": true, 00:05:11.599 "enable_quickack": false, 00:05:11.599 "enable_placement_id": 0, 00:05:11.599 "enable_zerocopy_send_server": true, 00:05:11.599 "enable_zerocopy_send_client": false, 00:05:11.599 "zerocopy_threshold": 0, 00:05:11.599 "tls_version": 0, 00:05:11.599 "enable_ktls": false 00:05:11.599 } 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "method": "sock_impl_set_options", 00:05:11.599 "params": { 00:05:11.599 "impl_name": "posix", 00:05:11.599 "recv_buf_size": 2097152, 00:05:11.599 "send_buf_size": 2097152, 00:05:11.599 "enable_recv_pipe": true, 00:05:11.599 "enable_quickack": false, 00:05:11.599 "enable_placement_id": 0, 00:05:11.599 "enable_zerocopy_send_server": true, 00:05:11.599 "enable_zerocopy_send_client": false, 00:05:11.599 "zerocopy_threshold": 0, 00:05:11.599 "tls_version": 0, 00:05:11.599 "enable_ktls": false 00:05:11.599 } 00:05:11.599 } 00:05:11.599 ] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "vmd", 00:05:11.599 "config": [] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "accel", 00:05:11.599 "config": [ 00:05:11.599 { 00:05:11.599 "method": "accel_set_options", 00:05:11.599 "params": { 00:05:11.599 "small_cache_size": 128, 00:05:11.599 "large_cache_size": 16, 00:05:11.599 "task_count": 2048, 00:05:11.599 "sequence_count": 2048, 00:05:11.599 "buf_count": 2048 00:05:11.599 } 00:05:11.599 } 00:05:11.599 ] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "bdev", 00:05:11.599 "config": [ 00:05:11.599 { 00:05:11.599 "method": "bdev_set_options", 00:05:11.599 "params": { 00:05:11.599 "bdev_io_pool_size": 65535, 00:05:11.599 "bdev_io_cache_size": 256, 00:05:11.599 "bdev_auto_examine": true, 00:05:11.599 "iobuf_small_cache_size": 128, 00:05:11.599 "iobuf_large_cache_size": 16 00:05:11.599 } 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "method": "bdev_raid_set_options", 00:05:11.599 "params": { 00:05:11.599 "process_window_size_kb": 1024 00:05:11.599 } 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "method": "bdev_iscsi_set_options", 00:05:11.599 "params": { 00:05:11.599 "timeout_sec": 30 00:05:11.599 } 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "method": "bdev_nvme_set_options", 00:05:11.599 "params": { 00:05:11.599 "action_on_timeout": "none", 00:05:11.599 "timeout_us": 0, 00:05:11.599 "timeout_admin_us": 0, 00:05:11.599 "keep_alive_timeout_ms": 10000, 00:05:11.599 "arbitration_burst": 0, 00:05:11.599 "low_priority_weight": 0, 00:05:11.599 "medium_priority_weight": 0, 00:05:11.599 "high_priority_weight": 0, 00:05:11.599 "nvme_adminq_poll_period_us": 10000, 00:05:11.599 "nvme_ioq_poll_period_us": 0, 00:05:11.599 "io_queue_requests": 0, 00:05:11.599 "delay_cmd_submit": true, 00:05:11.599 "transport_retry_count": 4, 00:05:11.599 "bdev_retry_count": 3, 00:05:11.599 "transport_ack_timeout": 0, 00:05:11.599 "ctrlr_loss_timeout_sec": 0, 00:05:11.599 "reconnect_delay_sec": 0, 00:05:11.599 "fast_io_fail_timeout_sec": 0, 00:05:11.599 "disable_auto_failback": false, 00:05:11.599 "generate_uuids": false, 00:05:11.599 "transport_tos": 0, 
00:05:11.599 "nvme_error_stat": false, 00:05:11.599 "rdma_srq_size": 0, 00:05:11.599 "io_path_stat": false, 00:05:11.599 "allow_accel_sequence": false, 00:05:11.599 "rdma_max_cq_size": 0, 00:05:11.599 "rdma_cm_event_timeout_ms": 0, 00:05:11.599 "dhchap_digests": [ 00:05:11.599 "sha256", 00:05:11.599 "sha384", 00:05:11.599 "sha512" 00:05:11.599 ], 00:05:11.599 "dhchap_dhgroups": [ 00:05:11.599 "null", 00:05:11.599 "ffdhe2048", 00:05:11.599 "ffdhe3072", 00:05:11.599 "ffdhe4096", 00:05:11.599 "ffdhe6144", 00:05:11.599 "ffdhe8192" 00:05:11.599 ] 00:05:11.599 } 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "method": "bdev_nvme_set_hotplug", 00:05:11.599 "params": { 00:05:11.599 "period_us": 100000, 00:05:11.599 "enable": false 00:05:11.599 } 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "method": "bdev_wait_for_examine" 00:05:11.599 } 00:05:11.599 ] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "scsi", 00:05:11.599 "config": null 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "scheduler", 00:05:11.599 "config": [ 00:05:11.599 { 00:05:11.599 "method": "framework_set_scheduler", 00:05:11.599 "params": { 00:05:11.599 "name": "static" 00:05:11.599 } 00:05:11.599 } 00:05:11.599 ] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "vhost_scsi", 00:05:11.599 "config": [] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "vhost_blk", 00:05:11.599 "config": [] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "ublk", 00:05:11.599 "config": [] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "nbd", 00:05:11.599 "config": [] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "nvmf", 00:05:11.599 "config": [ 00:05:11.599 { 00:05:11.599 "method": "nvmf_set_config", 00:05:11.599 "params": { 00:05:11.599 "discovery_filter": "match_any", 00:05:11.599 "admin_cmd_passthru": { 00:05:11.599 "identify_ctrlr": false 00:05:11.599 } 00:05:11.599 } 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "method": "nvmf_set_max_subsystems", 00:05:11.599 "params": { 00:05:11.599 "max_subsystems": 1024 00:05:11.599 } 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "method": "nvmf_set_crdt", 00:05:11.599 "params": { 00:05:11.599 "crdt1": 0, 00:05:11.599 "crdt2": 0, 00:05:11.599 "crdt3": 0 00:05:11.599 } 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "method": "nvmf_create_transport", 00:05:11.599 "params": { 00:05:11.599 "trtype": "TCP", 00:05:11.599 "max_queue_depth": 128, 00:05:11.599 "max_io_qpairs_per_ctrlr": 127, 00:05:11.599 "in_capsule_data_size": 4096, 00:05:11.599 "max_io_size": 131072, 00:05:11.599 "io_unit_size": 131072, 00:05:11.599 "max_aq_depth": 128, 00:05:11.599 "num_shared_buffers": 511, 00:05:11.599 "buf_cache_size": 4294967295, 00:05:11.599 "dif_insert_or_strip": false, 00:05:11.599 "zcopy": false, 00:05:11.599 "c2h_success": true, 00:05:11.599 "sock_priority": 0, 00:05:11.599 "abort_timeout_sec": 1, 00:05:11.599 "ack_timeout": 0, 00:05:11.599 "data_wr_pool_size": 0 00:05:11.599 } 00:05:11.599 } 00:05:11.599 ] 00:05:11.599 }, 00:05:11.599 { 00:05:11.599 "subsystem": "iscsi", 00:05:11.599 "config": [ 00:05:11.599 { 00:05:11.599 "method": "iscsi_set_options", 00:05:11.599 "params": { 00:05:11.599 "node_base": "iqn.2016-06.io.spdk", 00:05:11.599 "max_sessions": 128, 00:05:11.599 "max_connections_per_session": 2, 00:05:11.599 "max_queue_depth": 64, 00:05:11.599 "default_time2wait": 2, 00:05:11.599 "default_time2retain": 20, 00:05:11.599 "first_burst_length": 8192, 00:05:11.599 "immediate_data": true, 00:05:11.599 "allow_duplicated_isid": false, 00:05:11.599 
"error_recovery_level": 0, 00:05:11.599 "nop_timeout": 60, 00:05:11.599 "nop_in_interval": 30, 00:05:11.599 "disable_chap": false, 00:05:11.599 "require_chap": false, 00:05:11.599 "mutual_chap": false, 00:05:11.599 "chap_group": 0, 00:05:11.599 "max_large_datain_per_connection": 64, 00:05:11.599 "max_r2t_per_connection": 4, 00:05:11.599 "pdu_pool_size": 36864, 00:05:11.599 "immediate_data_pool_size": 16384, 00:05:11.599 "data_out_pool_size": 2048 00:05:11.599 } 00:05:11.599 } 00:05:11.599 ] 00:05:11.599 } 00:05:11.599 ] 00:05:11.599 } 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1877999 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1877999 ']' 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1877999 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1877999 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1877999' 00:05:11.599 killing process with pid 1877999 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1877999 00:05:11.599 10:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1877999 00:05:11.860 10:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1878150 00:05:11.860 10:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:11.860 10:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1878150 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1878150 ']' 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1878150 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1878150 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1878150' 00:05:17.146 killing process with pid 1878150 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1878150 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1878150 
00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.146 00:05:17.146 real 0m6.510s 00:05:17.146 user 0m6.396s 00:05:17.146 sys 0m0.540s 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.146 ************************************ 00:05:17.146 END TEST skip_rpc_with_json 00:05:17.146 ************************************ 00:05:17.146 10:43:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.146 10:43:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:17.146 10:43:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.146 10:43:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.146 10:43:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.146 ************************************ 00:05:17.146 START TEST skip_rpc_with_delay 00:05:17.146 ************************************ 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:17.146 10:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.146 [2024-07-12 10:43:34.019592] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
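For contrast with the error above, the flag's supported use, as a minimal sketch (paths assumed relative to an SPDK checkout; framework_start_init is assumed here as the RPC that releases the pause, it does not appear in this run):

./build/bin/spdk_tgt --wait-for-rpc &    # starts the RPC server, then pauses before subsystem init
./scripts/rpc.py framework_start_init    # assumed RPC name: completes the deferred initialization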
00:05:17.146 [2024-07-12 10:43:34.019679] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:05:17.146 10:43:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1
00:05:17.146 10:43:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:17.146 10:43:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:17.146 10:43:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:17.146
00:05:17.146 real 0m0.085s
00:05:17.146 user 0m0.061s
00:05:17.146 sys 0m0.023s
00:05:17.146 10:43:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:17.146 10:43:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:17.146 ************************************
00:05:17.146 END TEST skip_rpc_with_delay
00:05:17.146 ************************************
00:05:17.146 10:43:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:17.146 10:43:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:17.146 10:43:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:17.146 10:43:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:17.146 10:43:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:17.146 10:43:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:17.146 10:43:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.146 ************************************
00:05:17.146 START TEST exit_on_failed_rpc_init
00:05:17.146 ************************************
00:05:17.146 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init
00:05:17.146 10:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1879420
00:05:17.146 10:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1879420
00:05:17.146 10:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:17.146 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1879420 ']'
00:05:17.146 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:17.146 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:17.146 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:17.146 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:17.146 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:17.406 [2024-07-12 10:43:34.175431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
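waitforlisten 1879420 above blocks until the freshly started target answers on /var/tmp/spdk.sock (max_retries=100 in the trace). A rough equivalent, assuming scripts/rpc.py and its rpc_get_methods call are available; the helper name, retry count, and sleep interval here are illustrative, not the exact autotest_common.sh implementation:

  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
      # give up early if the target died during startup
      kill -0 "$pid" 2>/dev/null || return 1
      # rpc_get_methods succeeds once the RPC server is accepting connections
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
        return 0
      fi
      sleep 0.1
    done
    return 1
  }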
00:05:17.406 [2024-07-12 10:43:34.175489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1879420 ]
00:05:17.406 EAL: No free 2048 kB hugepages reported on node 1
00:05:17.406 [2024-07-12 10:43:34.257333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:17.406 [2024-07-12 10:43:34.328347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:17.976 10:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:18.237 [2024-07-12 10:43:35.005932] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
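The errors that follow are the whole point of exit_on_failed_rpc_init: with the first target still holding /var/tmp/spdk.sock, a second spdk_tgt must fail spdk_rpc_initialize and exit non-zero. Reusing the sketches above, the scenario condenses to roughly this (a sketch, not the literal skip_rpc.sh):

  ./build/bin/spdk_tgt -m 0x1 &      # first instance claims /var/tmp/spdk.sock
  spdk_pid=$!
  waitforlisten_sketch "$spdk_pid"
  NOT ./build/bin/spdk_tgt -m 0x2    # second instance cannot bind the socket and must exit
  killprocess "$spdk_pid"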
00:05:18.237 [2024-07-12 10:43:35.005987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1879543 ]
00:05:18.237 EAL: No free 2048 kB hugepages reported on node 1
00:05:18.237 [2024-07-12 10:43:35.081465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.237 [2024-07-12 10:43:35.145369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:18.237 [2024-07-12 10:43:35.145431] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:18.237 [2024-07-12 10:43:35.145440] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:18.237 [2024-07-12 10:43:35.145447] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1879420
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1879420 ']'
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1879420
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:18.237 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1879420
00:05:18.497 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:18.497 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:18.497 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1879420'
killing process with pid 1879420
00:05:18.497 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1879420
00:05:18.497 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1879420
00:05:18.497
00:05:18.497 real 0m1.330s
00:05:18.497 user 0m1.570s
00:05:18.497 sys 0m0.379s
00:05:18.497 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:18.497 10:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:18.497 ************************************
00:05:18.497 END TEST exit_on_failed_rpc_init
00:05:18.497 ************************************
00:05:18.758 10:43:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:18.758 10:43:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:18.758
00:05:18.758 real 0m13.597s
00:05:18.758 user 0m13.180s
00:05:18.758 sys 0m1.510s
00:05:18.758 10:43:35 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:18.758 10:43:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:18.758 ************************************
00:05:18.758 END TEST skip_rpc
00:05:18.758 ************************************
00:05:18.758 10:43:35 -- common/autotest_common.sh@1142 -- # return 0
00:05:18.758 10:43:35 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:18.758 10:43:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:18.758 10:43:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:18.758 10:43:35 -- common/autotest_common.sh@10 -- # set +x
00:05:18.758 ************************************
00:05:18.758 START TEST rpc_client
00:05:18.758 ************************************
00:05:18.758 10:43:35 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:18.758 * Looking for test storage...
00:05:18.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:05:18.758 10:43:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:18.758 OK
00:05:18.758 10:43:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:18.758
00:05:18.758 real 0m0.131s
00:05:18.758 user 0m0.047s
00:05:18.758 sys 0m0.093s
00:05:18.758 10:43:35 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:18.758 10:43:35 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:18.758 ************************************
00:05:18.758 END TEST rpc_client
00:05:18.758 ************************************
00:05:18.758 10:43:35 -- common/autotest_common.sh@1142 -- # return 0
00:05:18.758 10:43:35 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:18.758 10:43:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:18.758 10:43:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:18.758 10:43:35 -- common/autotest_common.sh@10 -- # set +x
00:05:19.019 ************************************
00:05:19.019 START TEST json_config
00:05:19.019 ************************************
00:05:19.019 10:43:35 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:19.019 10:43:35 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:19.019 10:43:35 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:19.019 10:43:35 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:19.019 10:43:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:19.019 10:43:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:19.019 10:43:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:19.019 10:43:35 json_config -- paths/export.sh@5 -- # export PATH
00:05:19.019 10:43:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@47 -- # : 0
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:19.019 10:43:35 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:19.019 10:43:35 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init'
00:05:19.020 INFO: JSON configuration test init
00:05:19.020 10:43:35 json_config -- json_config/json_config.sh@357 -- # json_config_test_init
00:05:19.020 10:43:35 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init
00:05:19.020 10:43:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:19.020 10:43:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.020 10:43:35 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target
00:05:19.020 10:43:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:19.020 10:43:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.020 10:43:35 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc
00:05:19.020 10:43:35 json_config -- json_config/common.sh@9 -- # local app=target
00:05:19.020 10:43:35 json_config -- json_config/common.sh@10 -- # shift
00:05:19.020 10:43:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:19.020 10:43:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:19.020 10:43:35 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:19.020 10:43:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:19.020 10:43:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:19.020 10:43:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1879907
00:05:19.020 10:43:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:19.020 10:43:35 json_config -- json_config/common.sh@25 -- # waitforlisten 1879907 /var/tmp/spdk_tgt.sock
00:05:19.020 10:43:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 1879907 ']'
00:05:19.020 10:43:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:19.020 10:43:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:19.020 10:43:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:19.020 10:43:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:19.020 10:43:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:19.020 10:43:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.020 [2024-07-12 10:43:35.960785] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:19.020 [2024-07-12 10:43:35.960857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1879907 ]
00:05:19.020 EAL: No free 2048 kB hugepages reported on node 1
00:05:19.279 [2024-07-12 10:43:36.232644] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.538 [2024-07-12 10:43:36.278494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.797 10:43:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:19.797 10:43:36 json_config -- common/autotest_common.sh@862 -- # return 0
00:05:19.797 10:43:36 json_config -- json_config/common.sh@26 -- # echo ''
00:05:19.797
00:05:19.797 10:43:36 json_config -- json_config/json_config.sh@269 -- # create_accel_config
00:05:19.797 10:43:36 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config
00:05:19.797 10:43:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:19.798 10:43:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.798 10:43:36 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]]
00:05:19.798 10:43:36 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config
00:05:19.798 10:43:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:19.798 10:43:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.798 10:43:36 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:19.798 10:43:36 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config
00:05:19.798 10:43:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:20.368 10:43:37 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types
00:05:20.368 10:43:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:20.368 10:43:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:20.368 10:43:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:20.368 10:43:37 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:20.368 10:43:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:20.368 10:43:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:20.368 10:43:37 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:05:20.368 10:43:37 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:05:20.368 10:43:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@48 -- # local get_types
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types
00:05:20.629 10:43:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:20.629 10:43:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@55 -- # return 0
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]]
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]]
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config
00:05:20.629 10:43:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:20.629 10:43:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]]
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]]
00:05:20.629 10:43:37 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:20.629 10:43:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:20.889 MallocForNvmf0
00:05:20.889 10:43:37 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:20.889 10:43:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:20.889 MallocForNvmf1
00:05:20.889 10:43:37 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:05:20.889 10:43:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:05:21.149 [2024-07-12 10:43:37.979689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:21.149 10:43:38 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:21.149 10:43:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:21.409 10:43:38 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:21.409 10:43:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:21.409 10:43:38 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:21.409 10:43:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:21.670 10:43:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:21.670 10:43:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:21.670 [2024-07-12 10:43:38.629648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:21.930 10:43:38 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config
00:05:21.930 10:43:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:21.930 10:43:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.930 10:43:38 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target
00:05:21.930 10:43:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:21.930 10:43:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.930 10:43:38 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]]
00:05:21.930 10:43:38 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:21.930 10:43:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:21.930 MallocBdevForConfigChangeCheck
00:05:22.191 10:43:38 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init
00:05:22.191 10:43:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:22.191 10:43:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:22.191 10:43:38 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config
00:05:22.191 10:43:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:22.452 10:43:39 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...'
00:05:22.452 INFO: shutting down applications...
00:05:22.452 10:43:39 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]]
00:05:22.452 10:43:39 json_config -- json_config/json_config.sh@368 -- # json_config_clear target
00:05:22.452 10:43:39 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]]
00:05:22.452 10:43:39 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:22.714 Calling clear_iscsi_subsystem
00:05:22.714 Calling clear_nvmf_subsystem
00:05:22.714 Calling clear_nbd_subsystem
00:05:22.714 Calling clear_ublk_subsystem
00:05:22.714 Calling clear_vhost_blk_subsystem
00:05:22.714 Calling clear_vhost_scsi_subsystem
00:05:22.714 Calling clear_bdev_subsystem
00:05:22.714 10:43:39 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:22.714 10:43:39 json_config -- json_config/json_config.sh@343 -- # count=100
00:05:22.714 10:43:39 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']'
00:05:22.714 10:43:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:22.714 10:43:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:22.714 10:43:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:22.975 10:43:39 json_config -- json_config/json_config.sh@345 -- # break
00:05:22.975 10:43:39 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']'
00:05:22.975 10:43:39 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target
00:05:22.975 10:43:39 json_config -- json_config/common.sh@31 -- # local app=target
00:05:22.975 10:43:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:22.975 10:43:39 json_config -- json_config/common.sh@35 -- # [[ -n 1879907 ]]
00:05:22.975 10:43:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1879907
00:05:22.975 10:43:39 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:22.975 10:43:39 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:22.975 10:43:39 json_config -- json_config/common.sh@41 -- # kill -0 1879907
00:05:22.975 10:43:39 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:23.547 10:43:40 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:23.547 10:43:40 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:23.547 10:43:40 json_config -- json_config/common.sh@41 -- # kill -0 1879907
00:05:23.547 10:43:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:23.547 10:43:40 json_config -- json_config/common.sh@43 -- # break
00:05:23.547 10:43:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:23.547 10:43:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:23.547 SPDK target shutdown done
00:05:23.547 10:43:40 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...'
00:05:23.547 INFO: relaunching applications...
00:05:23.547 10:43:40 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:23.547 10:43:40 json_config -- json_config/common.sh@9 -- # local app=target
00:05:23.547 10:43:40 json_config -- json_config/common.sh@10 -- # shift
00:05:23.547 10:43:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:23.547 10:43:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:23.547 10:43:40 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:23.547 10:43:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:23.547 10:43:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:23.547 10:43:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1880808
00:05:23.547 10:43:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:23.547 10:43:40 json_config -- json_config/common.sh@25 -- # waitforlisten 1880808 /var/tmp/spdk_tgt.sock
00:05:23.548 10:43:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:23.548 10:43:40 json_config -- common/autotest_common.sh@829 -- # '[' -z 1880808 ']'
00:05:23.548 10:43:40 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:23.548 10:43:40 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:23.548 10:43:40 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:23.548 10:43:40 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:23.548 10:43:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:23.548 [2024-07-12 10:43:40.513689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
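The json_diff.sh run that follows compares the relaunched target's live configuration with the spdk_tgt_config.json it was booted from: both sides are normalized by config_filter.py -method sort before diffing, so key order cannot cause a false mismatch. Reduced to its core, the mechanism visible in the trace below looks roughly like this (a sketch, not the literal script):

  live=$(mktemp) saved=$(mktemp)
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > "$live"
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$saved"
  diff -u "$saved" "$live" && echo 'INFO: JSON config files are the same'
  rm "$live" "$saved"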
00:05:23.548 [2024-07-12 10:43:40.513766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1880808 ]
00:05:23.808 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.069 [2024-07-12 10:43:40.884542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.069 [2024-07-12 10:43:40.942740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.640 [2024-07-12 10:43:41.425571] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:24.640 [2024-07-12 10:43:41.457899] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:24.640 10:43:41 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:24.640 10:43:41 json_config -- common/autotest_common.sh@862 -- # return 0
00:05:24.640 10:43:41 json_config -- json_config/common.sh@26 -- # echo ''
00:05:24.640
00:05:24.640 10:43:41 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]]
00:05:24.640 10:43:41 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...'
00:05:24.640 INFO: Checking if target configuration is the same...
00:05:24.640 10:43:41 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.640 10:43:41 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config
00:05:24.640 10:43:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:24.640 + '[' 2 -ne 2 ']'
00:05:24.640 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:24.640 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:24.640 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:24.640 +++ basename /dev/fd/62
00:05:24.640 ++ mktemp /tmp/62.XXX
00:05:24.640 + tmp_file_1=/tmp/62.Ldp
00:05:24.640 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.640 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:24.640 + tmp_file_2=/tmp/spdk_tgt_config.json.Gni
00:05:24.640 + ret=0
00:05:24.640 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:24.902 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:24.902 + diff -u /tmp/62.Ldp /tmp/spdk_tgt_config.json.Gni
00:05:24.902 + echo 'INFO: JSON config files are the same'
00:05:24.902 INFO: JSON config files are the same
00:05:24.902 + rm /tmp/62.Ldp /tmp/spdk_tgt_config.json.Gni
00:05:24.902 + exit 0
00:05:24.902 10:43:41 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]]
00:05:24.902 10:43:41 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:05:24.902 INFO: changing configuration and checking if this can be detected...
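MallocBdevForConfigChangeCheck exists only so the test can mutate the running configuration and prove the comparison notices: deleting the bdev must flip the sorted diff from exit 0 to exit 1. Schematically, reusing $saved from the sketch above (again an illustration of the mechanism, not the literal json_config.sh):

  # mutate the live config, then expect the normalized diff to fail
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  if ! diff -u "$saved" <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort); then
    echo 'INFO: configuration change detected.'
  fi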
00:05:24.902 10:43:41 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:24.902 10:43:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:25.189 10:43:42 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:25.189 10:43:42 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config
00:05:25.189 10:43:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:25.189 + '[' 2 -ne 2 ']'
00:05:25.189 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:25.189 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:25.189 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:25.189 +++ basename /dev/fd/62
00:05:25.189 ++ mktemp /tmp/62.XXX
00:05:25.189 + tmp_file_1=/tmp/62.Xi5
00:05:25.189 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:25.189 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:25.189 + tmp_file_2=/tmp/spdk_tgt_config.json.yFa
00:05:25.189 + ret=0
00:05:25.189 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:25.450 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:25.450 + diff -u /tmp/62.Xi5 /tmp/spdk_tgt_config.json.yFa
00:05:25.450 + ret=1
00:05:25.450 + echo '=== Start of file: /tmp/62.Xi5 ==='
00:05:25.450 + cat /tmp/62.Xi5
00:05:25.450 + echo '=== End of file: /tmp/62.Xi5 ==='
00:05:25.450 + echo ''
00:05:25.450 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yFa ==='
00:05:25.450 + cat /tmp/spdk_tgt_config.json.yFa
00:05:25.450 + echo '=== End of file: /tmp/spdk_tgt_config.json.yFa ==='
00:05:25.450 + echo ''
00:05:25.450 + rm /tmp/62.Xi5 /tmp/spdk_tgt_config.json.yFa
00:05:25.450 + exit 1
00:05:25.450 10:43:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.'
00:05:25.450 INFO: configuration change detected.
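The shutdown path traced earlier (json_config/common.sh@38-45, and again in json_config_extra_key below) is the orderly stop: send SIGINT, then poll kill -0 for up to 30 half-second intervals before declaring the target down. As a standalone sketch of that loop:

  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
    # kill -0 starts failing once the process has exited
    kill -0 "$app_pid" 2>/dev/null || break
    sleep 0.5
  done
  echo 'SPDK target shutdown done'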
00:05:25.450 10:43:42 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini
00:05:25.450 10:43:42 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini
00:05:25.450 10:43:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:25.450 10:43:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:25.450 10:43:42 json_config -- json_config/json_config.sh@307 -- # local ret=0
00:05:25.451 10:43:42 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]]
00:05:25.451 10:43:42 json_config -- json_config/json_config.sh@317 -- # [[ -n 1880808 ]]
00:05:25.451 10:43:42 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config
00:05:25.451 10:43:42 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config
00:05:25.451 10:43:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:25.451 10:43:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:25.451 10:43:42 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]]
00:05:25.451 10:43:42 json_config -- json_config/json_config.sh@193 -- # uname -s
00:05:25.451 10:43:42 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]]
00:05:25.451 10:43:42 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio
00:05:25.451 10:43:42 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]]
00:05:25.451 10:43:42 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config
00:05:25.451 10:43:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:25.451 10:43:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:25.712 10:43:42 json_config -- json_config/json_config.sh@323 -- # killprocess 1880808
00:05:25.712 10:43:42 json_config -- common/autotest_common.sh@948 -- # '[' -z 1880808 ']'
00:05:25.712 10:43:42 json_config -- common/autotest_common.sh@952 -- # kill -0 1880808
00:05:25.712 10:43:42 json_config -- common/autotest_common.sh@953 -- # uname
00:05:25.712 10:43:42 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:25.712 10:43:42 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1880808
00:05:25.712 10:43:42 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:25.712 10:43:42 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:25.712 10:43:42 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1880808'
killing process with pid 1880808
00:05:25.712 10:43:42 json_config -- common/autotest_common.sh@967 -- # kill 1880808
00:05:25.712 10:43:42 json_config -- common/autotest_common.sh@972 -- # wait 1880808
00:05:25.974 10:43:42 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:25.974 10:43:42 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini
00:05:25.974 10:43:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:25.974 10:43:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:25.974 10:43:42 json_config -- json_config/json_config.sh@328 -- # return 0
00:05:25.974 10:43:42 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success'
00:05:25.974 INFO: Success
00:05:25.974
00:05:25.974 real 0m7.051s
00:05:25.974 user 0m8.464s
00:05:25.974 sys 0m1.808s
00:05:25.974 10:43:42 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:25.974 10:43:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:25.974 ************************************
00:05:25.974 END TEST json_config
00:05:25.974 ************************************
00:05:25.974 10:43:42 -- common/autotest_common.sh@1142 -- # return 0
00:05:25.974 10:43:42 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:25.974 10:43:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:25.974 10:43:42 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:25.974 10:43:42 -- common/autotest_common.sh@10 -- # set +x
00:05:25.974 ************************************
00:05:25.974 START TEST json_config_extra_key
00:05:25.974 ************************************
00:05:25.974 10:43:42 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:26.237 10:43:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:26.238 10:43:42 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:26.238 10:43:42 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:26.238 10:43:42 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:26.238 10:43:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.238 10:43:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.238 10:43:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.238 10:43:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:26.238 10:43:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:26.238 10:43:42 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:26.238 10:43:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:26.238 10:43:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:26.238 10:43:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:26.238 10:43:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:26.238 10:43:43 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:26.238 10:43:43 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
INFO: launching applications...
00:05:26.238 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1881566
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1881566 /var/tmp/spdk_tgt.sock
00:05:26.238 10:43:43 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1881566 ']'
00:05:26.238 10:43:43 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:26.238 10:43:43 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:26.238 10:43:43 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:26.238 10:43:43 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:26.238 10:43:43 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:26.238 10:43:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:26.238 [2024-07-12 10:43:43.069110] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
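json_config_extra_key boots the target directly from a canned config (test/json_config/extra_key.json) rather than driving RPCs, so the test reduces to: launch with --json, wait for the RPC socket, then shut down. The launch step, schematically, with flags taken from the trace and the waitforlisten helper being the sketch from earlier:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  app_pid=$!
  waitforlisten_sketch "$app_pid" /var/tmp/spdk_tgt.sock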
00:05:26.238 [2024-07-12 10:43:43.069205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881566 ] 00:05:26.238 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.500 [2024-07-12 10:43:43.355611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.500 [2024-07-12 10:43:43.397957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.070 10:43:43 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.070 10:43:43 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:27.070 10:43:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:27.070 00:05:27.070 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:27.070 INFO: shutting down applications... 00:05:27.070 10:43:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:27.070 10:43:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:27.070 10:43:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.070 10:43:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1881566 ]] 00:05:27.070 10:43:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1881566 00:05:27.070 10:43:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.070 10:43:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.070 10:43:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1881566 00:05:27.070 10:43:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.643 10:43:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.643 10:43:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.643 10:43:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1881566 00:05:27.643 10:43:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:27.643 10:43:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:27.643 10:43:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.643 10:43:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.643 SPDK target shutdown done 00:05:27.643 10:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.643 Success 00:05:27.643 00:05:27.643 real 0m1.434s 00:05:27.643 user 0m1.060s 00:05:27.643 sys 0m0.364s 00:05:27.643 10:43:44 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.643 10:43:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.643 ************************************ 00:05:27.643 END TEST json_config_extra_key 00:05:27.643 ************************************ 00:05:27.643 10:43:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.643 10:43:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.643 10:43:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.643 10:43:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.643 10:43:44 -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.643 ************************************ 00:05:27.643 START TEST alias_rpc 00:05:27.643 ************************************ 00:05:27.643 10:43:44 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.643 * Looking for test storage... 00:05:27.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:27.643 10:43:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.643 10:43:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1881948 00:05:27.643 10:43:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1881948 00:05:27.643 10:43:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.643 10:43:44 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1881948 ']' 00:05:27.643 10:43:44 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.643 10:43:44 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.643 10:43:44 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.643 10:43:44 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.643 10:43:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.643 [2024-07-12 10:43:44.581714] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:27.643 [2024-07-12 10:43:44.581780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881948 ] 00:05:27.643 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.904 [2024-07-12 10:43:44.662362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.904 [2024-07-12 10:43:44.729322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.475 10:43:45 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.475 10:43:45 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:28.475 10:43:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:28.736 10:43:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1881948 00:05:28.736 10:43:45 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1881948 ']' 00:05:28.736 10:43:45 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1881948 00:05:28.736 10:43:45 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:28.736 10:43:45 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.736 10:43:45 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1881948 00:05:28.736 10:43:45 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.736 10:43:45 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.736 10:43:45 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1881948' 00:05:28.736 killing process with pid 1881948 00:05:28.736 10:43:45 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1881948 00:05:28.737 10:43:45 alias_rpc -- common/autotest_common.sh@972 -- # wait 1881948 00:05:28.998 00:05:28.998 real 0m1.376s 00:05:28.998 user 0m1.513s 00:05:28.998 sys 0m0.398s 00:05:28.998 10:43:45 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.998 10:43:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.998 ************************************ 00:05:28.998 END TEST alias_rpc 00:05:28.998 ************************************ 00:05:28.998 10:43:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.998 10:43:45 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:28.998 10:43:45 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:28.998 10:43:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.998 10:43:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.998 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.998 ************************************ 00:05:28.998 START TEST spdkcli_tcp 00:05:28.998 ************************************ 00:05:28.998 10:43:45 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:28.998 * Looking for test storage... 00:05:28.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:28.998 10:43:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:28.998 10:43:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:28.998 10:43:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:28.998 10:43:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:28.998 10:43:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:28.998 10:43:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:28.998 10:43:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:28.998 10:43:45 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.998 10:43:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.998 10:43:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1882239 00:05:28.998 10:43:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1882239 00:05:29.259 10:43:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:29.259 10:43:45 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1882239 ']' 00:05:29.259 10:43:45 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.259 10:43:45 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.259 10:43:45 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.259 10:43:45 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.259 10:43:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.259 [2024-07-12 10:43:46.041767] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
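The TCP leg of this test, traced just below, reduces to a short sketch: socat bridges a local TCP listener to the target's UNIX-domain RPC socket, and rpc.py then speaks JSON-RPC over TCP. Flags are exactly as traced; only the bridge teardown is added here for completeness:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    SOCAT_PID=$!
    # -r connection retries, -t timeout, -s/-p address of the socat listener
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill $SOCAT_PID
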
00:05:29.259 [2024-07-12 10:43:46.041843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882239 ] 00:05:29.259 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.259 [2024-07-12 10:43:46.119621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.259 [2024-07-12 10:43:46.182976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.259 [2024-07-12 10:43:46.182978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.830 10:43:46 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.830 10:43:46 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:29.830 10:43:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1882350 00:05:29.830 10:43:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:29.830 10:43:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:30.091 [ 00:05:30.091 "bdev_malloc_delete", 00:05:30.091 "bdev_malloc_create", 00:05:30.091 "bdev_null_resize", 00:05:30.091 "bdev_null_delete", 00:05:30.091 "bdev_null_create", 00:05:30.091 "bdev_nvme_cuse_unregister", 00:05:30.091 "bdev_nvme_cuse_register", 00:05:30.091 "bdev_opal_new_user", 00:05:30.091 "bdev_opal_set_lock_state", 00:05:30.091 "bdev_opal_delete", 00:05:30.091 "bdev_opal_get_info", 00:05:30.091 "bdev_opal_create", 00:05:30.091 "bdev_nvme_opal_revert", 00:05:30.091 "bdev_nvme_opal_init", 00:05:30.091 "bdev_nvme_send_cmd", 00:05:30.091 "bdev_nvme_get_path_iostat", 00:05:30.091 "bdev_nvme_get_mdns_discovery_info", 00:05:30.091 "bdev_nvme_stop_mdns_discovery", 00:05:30.091 "bdev_nvme_start_mdns_discovery", 00:05:30.091 "bdev_nvme_set_multipath_policy", 00:05:30.091 "bdev_nvme_set_preferred_path", 00:05:30.091 "bdev_nvme_get_io_paths", 00:05:30.091 "bdev_nvme_remove_error_injection", 00:05:30.091 "bdev_nvme_add_error_injection", 00:05:30.091 "bdev_nvme_get_discovery_info", 00:05:30.091 "bdev_nvme_stop_discovery", 00:05:30.091 "bdev_nvme_start_discovery", 00:05:30.091 "bdev_nvme_get_controller_health_info", 00:05:30.091 "bdev_nvme_disable_controller", 00:05:30.091 "bdev_nvme_enable_controller", 00:05:30.091 "bdev_nvme_reset_controller", 00:05:30.091 "bdev_nvme_get_transport_statistics", 00:05:30.091 "bdev_nvme_apply_firmware", 00:05:30.091 "bdev_nvme_detach_controller", 00:05:30.091 "bdev_nvme_get_controllers", 00:05:30.091 "bdev_nvme_attach_controller", 00:05:30.091 "bdev_nvme_set_hotplug", 00:05:30.091 "bdev_nvme_set_options", 00:05:30.091 "bdev_passthru_delete", 00:05:30.091 "bdev_passthru_create", 00:05:30.091 "bdev_lvol_set_parent_bdev", 00:05:30.091 "bdev_lvol_set_parent", 00:05:30.091 "bdev_lvol_check_shallow_copy", 00:05:30.091 "bdev_lvol_start_shallow_copy", 00:05:30.091 "bdev_lvol_grow_lvstore", 00:05:30.091 "bdev_lvol_get_lvols", 00:05:30.091 "bdev_lvol_get_lvstores", 00:05:30.091 "bdev_lvol_delete", 00:05:30.091 "bdev_lvol_set_read_only", 00:05:30.091 "bdev_lvol_resize", 00:05:30.091 "bdev_lvol_decouple_parent", 00:05:30.091 "bdev_lvol_inflate", 00:05:30.091 "bdev_lvol_rename", 00:05:30.091 "bdev_lvol_clone_bdev", 00:05:30.091 "bdev_lvol_clone", 00:05:30.091 "bdev_lvol_snapshot", 00:05:30.091 "bdev_lvol_create", 00:05:30.091 "bdev_lvol_delete_lvstore", 00:05:30.091 
"bdev_lvol_rename_lvstore", 00:05:30.091 "bdev_lvol_create_lvstore", 00:05:30.091 "bdev_raid_set_options", 00:05:30.091 "bdev_raid_remove_base_bdev", 00:05:30.091 "bdev_raid_add_base_bdev", 00:05:30.091 "bdev_raid_delete", 00:05:30.091 "bdev_raid_create", 00:05:30.091 "bdev_raid_get_bdevs", 00:05:30.091 "bdev_error_inject_error", 00:05:30.091 "bdev_error_delete", 00:05:30.091 "bdev_error_create", 00:05:30.091 "bdev_split_delete", 00:05:30.091 "bdev_split_create", 00:05:30.091 "bdev_delay_delete", 00:05:30.091 "bdev_delay_create", 00:05:30.091 "bdev_delay_update_latency", 00:05:30.091 "bdev_zone_block_delete", 00:05:30.091 "bdev_zone_block_create", 00:05:30.091 "blobfs_create", 00:05:30.091 "blobfs_detect", 00:05:30.092 "blobfs_set_cache_size", 00:05:30.092 "bdev_aio_delete", 00:05:30.092 "bdev_aio_rescan", 00:05:30.092 "bdev_aio_create", 00:05:30.092 "bdev_ftl_set_property", 00:05:30.092 "bdev_ftl_get_properties", 00:05:30.092 "bdev_ftl_get_stats", 00:05:30.092 "bdev_ftl_unmap", 00:05:30.092 "bdev_ftl_unload", 00:05:30.092 "bdev_ftl_delete", 00:05:30.092 "bdev_ftl_load", 00:05:30.092 "bdev_ftl_create", 00:05:30.092 "bdev_virtio_attach_controller", 00:05:30.092 "bdev_virtio_scsi_get_devices", 00:05:30.092 "bdev_virtio_detach_controller", 00:05:30.092 "bdev_virtio_blk_set_hotplug", 00:05:30.092 "bdev_iscsi_delete", 00:05:30.092 "bdev_iscsi_create", 00:05:30.092 "bdev_iscsi_set_options", 00:05:30.092 "accel_error_inject_error", 00:05:30.092 "ioat_scan_accel_module", 00:05:30.092 "dsa_scan_accel_module", 00:05:30.092 "iaa_scan_accel_module", 00:05:30.092 "vfu_virtio_create_scsi_endpoint", 00:05:30.092 "vfu_virtio_scsi_remove_target", 00:05:30.092 "vfu_virtio_scsi_add_target", 00:05:30.092 "vfu_virtio_create_blk_endpoint", 00:05:30.092 "vfu_virtio_delete_endpoint", 00:05:30.092 "keyring_file_remove_key", 00:05:30.092 "keyring_file_add_key", 00:05:30.092 "keyring_linux_set_options", 00:05:30.092 "iscsi_get_histogram", 00:05:30.092 "iscsi_enable_histogram", 00:05:30.092 "iscsi_set_options", 00:05:30.092 "iscsi_get_auth_groups", 00:05:30.092 "iscsi_auth_group_remove_secret", 00:05:30.092 "iscsi_auth_group_add_secret", 00:05:30.092 "iscsi_delete_auth_group", 00:05:30.092 "iscsi_create_auth_group", 00:05:30.092 "iscsi_set_discovery_auth", 00:05:30.092 "iscsi_get_options", 00:05:30.092 "iscsi_target_node_request_logout", 00:05:30.092 "iscsi_target_node_set_redirect", 00:05:30.092 "iscsi_target_node_set_auth", 00:05:30.092 "iscsi_target_node_add_lun", 00:05:30.092 "iscsi_get_stats", 00:05:30.092 "iscsi_get_connections", 00:05:30.092 "iscsi_portal_group_set_auth", 00:05:30.092 "iscsi_start_portal_group", 00:05:30.092 "iscsi_delete_portal_group", 00:05:30.092 "iscsi_create_portal_group", 00:05:30.092 "iscsi_get_portal_groups", 00:05:30.092 "iscsi_delete_target_node", 00:05:30.092 "iscsi_target_node_remove_pg_ig_maps", 00:05:30.092 "iscsi_target_node_add_pg_ig_maps", 00:05:30.092 "iscsi_create_target_node", 00:05:30.092 "iscsi_get_target_nodes", 00:05:30.092 "iscsi_delete_initiator_group", 00:05:30.092 "iscsi_initiator_group_remove_initiators", 00:05:30.092 "iscsi_initiator_group_add_initiators", 00:05:30.092 "iscsi_create_initiator_group", 00:05:30.092 "iscsi_get_initiator_groups", 00:05:30.092 "nvmf_set_crdt", 00:05:30.092 "nvmf_set_config", 00:05:30.092 "nvmf_set_max_subsystems", 00:05:30.092 "nvmf_stop_mdns_prr", 00:05:30.092 "nvmf_publish_mdns_prr", 00:05:30.092 "nvmf_subsystem_get_listeners", 00:05:30.092 "nvmf_subsystem_get_qpairs", 00:05:30.092 "nvmf_subsystem_get_controllers", 00:05:30.092 
"nvmf_get_stats", 00:05:30.092 "nvmf_get_transports", 00:05:30.092 "nvmf_create_transport", 00:05:30.092 "nvmf_get_targets", 00:05:30.092 "nvmf_delete_target", 00:05:30.092 "nvmf_create_target", 00:05:30.092 "nvmf_subsystem_allow_any_host", 00:05:30.092 "nvmf_subsystem_remove_host", 00:05:30.092 "nvmf_subsystem_add_host", 00:05:30.092 "nvmf_ns_remove_host", 00:05:30.092 "nvmf_ns_add_host", 00:05:30.092 "nvmf_subsystem_remove_ns", 00:05:30.092 "nvmf_subsystem_add_ns", 00:05:30.092 "nvmf_subsystem_listener_set_ana_state", 00:05:30.092 "nvmf_discovery_get_referrals", 00:05:30.092 "nvmf_discovery_remove_referral", 00:05:30.092 "nvmf_discovery_add_referral", 00:05:30.092 "nvmf_subsystem_remove_listener", 00:05:30.092 "nvmf_subsystem_add_listener", 00:05:30.092 "nvmf_delete_subsystem", 00:05:30.092 "nvmf_create_subsystem", 00:05:30.092 "nvmf_get_subsystems", 00:05:30.092 "env_dpdk_get_mem_stats", 00:05:30.092 "nbd_get_disks", 00:05:30.092 "nbd_stop_disk", 00:05:30.092 "nbd_start_disk", 00:05:30.092 "ublk_recover_disk", 00:05:30.092 "ublk_get_disks", 00:05:30.092 "ublk_stop_disk", 00:05:30.092 "ublk_start_disk", 00:05:30.092 "ublk_destroy_target", 00:05:30.092 "ublk_create_target", 00:05:30.092 "virtio_blk_create_transport", 00:05:30.092 "virtio_blk_get_transports", 00:05:30.092 "vhost_controller_set_coalescing", 00:05:30.092 "vhost_get_controllers", 00:05:30.092 "vhost_delete_controller", 00:05:30.092 "vhost_create_blk_controller", 00:05:30.092 "vhost_scsi_controller_remove_target", 00:05:30.092 "vhost_scsi_controller_add_target", 00:05:30.092 "vhost_start_scsi_controller", 00:05:30.092 "vhost_create_scsi_controller", 00:05:30.092 "thread_set_cpumask", 00:05:30.092 "framework_get_governor", 00:05:30.092 "framework_get_scheduler", 00:05:30.092 "framework_set_scheduler", 00:05:30.092 "framework_get_reactors", 00:05:30.092 "thread_get_io_channels", 00:05:30.092 "thread_get_pollers", 00:05:30.092 "thread_get_stats", 00:05:30.092 "framework_monitor_context_switch", 00:05:30.092 "spdk_kill_instance", 00:05:30.092 "log_enable_timestamps", 00:05:30.092 "log_get_flags", 00:05:30.092 "log_clear_flag", 00:05:30.092 "log_set_flag", 00:05:30.092 "log_get_level", 00:05:30.092 "log_set_level", 00:05:30.092 "log_get_print_level", 00:05:30.092 "log_set_print_level", 00:05:30.092 "framework_enable_cpumask_locks", 00:05:30.092 "framework_disable_cpumask_locks", 00:05:30.092 "framework_wait_init", 00:05:30.092 "framework_start_init", 00:05:30.092 "scsi_get_devices", 00:05:30.092 "bdev_get_histogram", 00:05:30.092 "bdev_enable_histogram", 00:05:30.092 "bdev_set_qos_limit", 00:05:30.092 "bdev_set_qd_sampling_period", 00:05:30.092 "bdev_get_bdevs", 00:05:30.092 "bdev_reset_iostat", 00:05:30.092 "bdev_get_iostat", 00:05:30.092 "bdev_examine", 00:05:30.092 "bdev_wait_for_examine", 00:05:30.092 "bdev_set_options", 00:05:30.092 "notify_get_notifications", 00:05:30.092 "notify_get_types", 00:05:30.092 "accel_get_stats", 00:05:30.092 "accel_set_options", 00:05:30.092 "accel_set_driver", 00:05:30.092 "accel_crypto_key_destroy", 00:05:30.092 "accel_crypto_keys_get", 00:05:30.092 "accel_crypto_key_create", 00:05:30.092 "accel_assign_opc", 00:05:30.092 "accel_get_module_info", 00:05:30.092 "accel_get_opc_assignments", 00:05:30.092 "vmd_rescan", 00:05:30.092 "vmd_remove_device", 00:05:30.092 "vmd_enable", 00:05:30.092 "sock_get_default_impl", 00:05:30.092 "sock_set_default_impl", 00:05:30.092 "sock_impl_set_options", 00:05:30.092 "sock_impl_get_options", 00:05:30.092 "iobuf_get_stats", 00:05:30.092 "iobuf_set_options", 
00:05:30.092 "keyring_get_keys", 00:05:30.092 "framework_get_pci_devices", 00:05:30.092 "framework_get_config", 00:05:30.092 "framework_get_subsystems", 00:05:30.092 "vfu_tgt_set_base_path", 00:05:30.092 "trace_get_info", 00:05:30.092 "trace_get_tpoint_group_mask", 00:05:30.092 "trace_disable_tpoint_group", 00:05:30.092 "trace_enable_tpoint_group", 00:05:30.092 "trace_clear_tpoint_mask", 00:05:30.092 "trace_set_tpoint_mask", 00:05:30.092 "spdk_get_version", 00:05:30.092 "rpc_get_methods" 00:05:30.092 ] 00:05:30.092 10:43:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:30.092 10:43:46 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.092 10:43:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.092 10:43:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:30.092 10:43:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1882239 00:05:30.092 10:43:47 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1882239 ']' 00:05:30.092 10:43:47 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1882239 00:05:30.092 10:43:47 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:30.092 10:43:47 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.092 10:43:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1882239 00:05:30.092 10:43:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.092 10:43:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.092 10:43:47 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1882239' 00:05:30.092 killing process with pid 1882239 00:05:30.092 10:43:47 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1882239 00:05:30.092 10:43:47 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1882239 00:05:30.354 00:05:30.354 real 0m1.383s 00:05:30.354 user 0m2.558s 00:05:30.354 sys 0m0.408s 00:05:30.354 10:43:47 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.354 10:43:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.354 ************************************ 00:05:30.354 END TEST spdkcli_tcp 00:05:30.354 ************************************ 00:05:30.354 10:43:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.354 10:43:47 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.354 10:43:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.354 10:43:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.354 10:43:47 -- common/autotest_common.sh@10 -- # set +x 00:05:30.354 ************************************ 00:05:30.354 START TEST dpdk_mem_utility 00:05:30.354 ************************************ 00:05:30.354 10:43:47 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.615 * Looking for test storage... 
00:05:30.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:30.615 10:43:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:30.615 10:43:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1882528 00:05:30.615 10:43:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1882528 00:05:30.615 10:43:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.615 10:43:47 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1882528 ']' 00:05:30.615 10:43:47 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.615 10:43:47 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.615 10:43:47 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.615 10:43:47 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.615 10:43:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.615 [2024-07-12 10:43:47.492648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:30.615 [2024-07-12 10:43:47.492723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882528 ] 00:05:30.615 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.615 [2024-07-12 10:43:47.569791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.876 [2024-07-12 10:43:47.632271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.448 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.448 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:31.448 10:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:31.448 10:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:31.448 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.448 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.448 { 00:05:31.448 "filename": "/tmp/spdk_mem_dump.txt" 00:05:31.448 } 00:05:31.448 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.448 10:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.448 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:31.448 1 heaps totaling size 814.000000 MiB 00:05:31.448 size: 814.000000 MiB heap id: 0 00:05:31.448 end heaps---------- 00:05:31.448 8 mempools totaling size 598.116089 MiB 00:05:31.448 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:31.448 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:31.448 size: 84.521057 MiB name: bdev_io_1882528 00:05:31.448 size: 51.011292 MiB name: evtpool_1882528 00:05:31.448 
size: 50.003479 MiB name: msgpool_1882528 00:05:31.448 size: 21.763794 MiB name: PDU_Pool 00:05:31.448 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:31.448 size: 0.026123 MiB name: Session_Pool 00:05:31.448 end mempools------- 00:05:31.448 6 memzones totaling size 4.142822 MiB 00:05:31.448 size: 1.000366 MiB name: RG_ring_0_1882528 00:05:31.448 size: 1.000366 MiB name: RG_ring_1_1882528 00:05:31.448 size: 1.000366 MiB name: RG_ring_4_1882528 00:05:31.448 size: 1.000366 MiB name: RG_ring_5_1882528 00:05:31.448 size: 0.125366 MiB name: RG_ring_2_1882528 00:05:31.448 size: 0.015991 MiB name: RG_ring_3_1882528 00:05:31.448 end memzones------- 00:05:31.448 10:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:31.448 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:31.448 list of free elements. size: 12.519348 MiB 00:05:31.448 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:31.448 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:31.448 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:31.448 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:31.448 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:31.448 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:31.448 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:31.448 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:31.448 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:31.448 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:31.448 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:31.448 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:31.448 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:31.448 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:31.448 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:31.448 list of standard malloc elements. 
size: 199.218079 MiB 00:05:31.448 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:31.448 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:31.448 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:31.448 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:31.448 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:31.448 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:31.448 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:31.448 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:31.448 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:31.448 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:31.448 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:31.448 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:31.448 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:31.448 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:31.448 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:31.448 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:31.449 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:31.449 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:31.449 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:31.449 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:31.449 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:31.449 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:31.449 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:31.449 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:31.449 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:31.449 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:31.449 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:31.449 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:31.449 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:31.449 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:31.449 list of memzone associated elements. 
size: 602.262573 MiB 00:05:31.449 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:31.449 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:31.449 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:31.449 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:31.449 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:31.449 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1882528_0 00:05:31.449 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:31.449 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1882528_0 00:05:31.449 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:31.449 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1882528_0 00:05:31.449 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:31.449 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:31.449 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:31.449 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:31.449 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:31.449 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1882528 00:05:31.449 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:31.449 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1882528 00:05:31.449 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:31.449 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1882528 00:05:31.449 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:31.449 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:31.449 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:31.449 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:31.449 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:31.449 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:31.449 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:31.449 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:31.449 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:31.449 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1882528 00:05:31.449 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:31.449 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1882528 00:05:31.449 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:31.449 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1882528 00:05:31.449 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:31.449 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1882528 00:05:31.449 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:31.449 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1882528 00:05:31.449 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:31.449 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:31.449 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:31.449 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:31.449 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:31.449 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:31.449 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:31.449 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1882528 00:05:31.449 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:31.449 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:31.449 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:31.449 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:31.449 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:31.449 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1882528 00:05:31.449 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:31.449 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:31.449 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:31.449 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1882528 00:05:31.449 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:31.449 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1882528 00:05:31.449 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:31.449 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:31.449 10:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:31.449 10:43:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1882528 00:05:31.449 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1882528 ']' 00:05:31.449 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1882528 00:05:31.449 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:31.449 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.449 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1882528 00:05:31.449 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.449 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.449 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1882528' 00:05:31.449 killing process with pid 1882528 00:05:31.449 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1882528 00:05:31.449 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1882528 00:05:31.710 00:05:31.710 real 0m1.277s 00:05:31.710 user 0m1.326s 00:05:31.710 sys 0m0.396s 00:05:31.710 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.710 10:43:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.710 ************************************ 00:05:31.710 END TEST dpdk_mem_utility 00:05:31.710 ************************************ 00:05:31.710 10:43:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.710 10:43:48 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:31.710 10:43:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.710 10:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.710 10:43:48 -- common/autotest_common.sh@10 -- # set +x 00:05:31.710 ************************************ 00:05:31.710 START TEST event 00:05:31.710 ************************************ 00:05:31.710 10:43:48 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:31.970 * Looking for test storage... 
00:05:31.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:31.970 10:43:48 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:31.970 10:43:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.970 10:43:48 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.970 10:43:48 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:31.970 10:43:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.970 10:43:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.970 ************************************ 00:05:31.970 START TEST event_perf 00:05:31.970 ************************************ 00:05:31.970 10:43:48 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.970 Running I/O for 1 seconds...[2024-07-12 10:43:48.838772] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:31.970 [2024-07-12 10:43:48.838872] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882814 ] 00:05:31.970 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.970 [2024-07-12 10:43:48.899258] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.230 [2024-07-12 10:43:48.955747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.230 [2024-07-12 10:43:48.955877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.230 [2024-07-12 10:43:48.956005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.230 Running I/O for 1 seconds...[2024-07-12 10:43:48.956007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.172 00:05:33.172 lcore 0: 173336 00:05:33.173 lcore 1: 173338 00:05:33.173 lcore 2: 173338 00:05:33.173 lcore 3: 173337 00:05:33.173 done. 00:05:33.173 00:05:33.173 real 0m1.182s 00:05:33.173 user 0m4.111s 00:05:33.173 sys 0m0.062s 00:05:33.173 10:43:49 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.173 10:43:49 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.173 ************************************ 00:05:33.173 END TEST event_perf 00:05:33.173 ************************************ 00:05:33.173 10:43:50 event -- common/autotest_common.sh@1142 -- # return 0 00:05:33.173 10:43:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:33.173 10:43:50 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:33.173 10:43:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.173 10:43:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.173 ************************************ 00:05:33.173 START TEST event_reactor 00:05:33.173 ************************************ 00:05:33.173 10:43:50 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:33.173 [2024-07-12 10:43:50.097818] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:33.173 [2024-07-12 10:43:50.097896] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883173 ] 00:05:33.173 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.433 [2024-07-12 10:43:50.176848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.433 [2024-07-12 10:43:50.242837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.442 test_start 00:05:34.442 oneshot 00:05:34.442 tick 100 00:05:34.442 tick 100 00:05:34.442 tick 250 00:05:34.442 tick 100 00:05:34.442 tick 100 00:05:34.442 tick 250 00:05:34.442 tick 100 00:05:34.442 tick 500 00:05:34.442 tick 100 00:05:34.442 tick 100 00:05:34.442 tick 250 00:05:34.442 tick 100 00:05:34.442 tick 100 00:05:34.442 test_end 00:05:34.442 00:05:34.442 real 0m1.210s 00:05:34.442 user 0m1.114s 00:05:34.442 sys 0m0.091s 00:05:34.442 10:43:51 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.442 10:43:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:34.442 ************************************ 00:05:34.442 END TEST event_reactor 00:05:34.442 ************************************ 00:05:34.442 10:43:51 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.443 10:43:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.443 10:43:51 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:34.443 10:43:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.443 10:43:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.443 ************************************ 00:05:34.443 START TEST event_reactor_perf 00:05:34.443 ************************************ 00:05:34.443 10:43:51 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.443 [2024-07-12 10:43:51.382358] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:34.443 [2024-07-12 10:43:51.382466] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883525 ] 00:05:34.443 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.703 [2024-07-12 10:43:51.458156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.703 [2024-07-12 10:43:51.510660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.644 test_start 00:05:35.644 test_end 00:05:35.644 Performance: 536815 events per second 00:05:35.644 00:05:35.644 real 0m1.192s 00:05:35.644 user 0m1.112s 00:05:35.644 sys 0m0.077s 00:05:35.644 10:43:52 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.644 10:43:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.645 ************************************ 00:05:35.645 END TEST event_reactor_perf 00:05:35.645 ************************************ 00:05:35.645 10:43:52 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.645 10:43:52 event -- event/event.sh@49 -- # uname -s 00:05:35.645 10:43:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:35.645 10:43:52 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:35.645 10:43:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.645 10:43:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.645 10:43:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.906 ************************************ 00:05:35.906 START TEST event_scheduler 00:05:35.906 ************************************ 00:05:35.906 10:43:52 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:35.906 * Looking for test storage... 00:05:35.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:35.906 10:43:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:35.906 10:43:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1883810 00:05:35.906 10:43:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.906 10:43:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:35.906 10:43:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1883810 00:05:35.906 10:43:52 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1883810 ']' 00:05:35.906 10:43:52 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.906 10:43:52 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.906 10:43:52 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:35.906 10:43:52 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.906 10:43:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.906 [2024-07-12 10:43:52.788473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:35.906 [2024-07-12 10:43:52.788541] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883810 ] 00:05:35.906 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.906 [2024-07-12 10:43:52.870589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.166 [2024-07-12 10:43:52.966149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.166 [2024-07-12 10:43:52.966266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.166 [2024-07-12 10:43:52.966513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.166 [2024-07-12 10:43:52.966518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:36.736 10:43:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.736 [2024-07-12 10:43:53.580778] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:36.736 [2024-07-12 10:43:53.580796] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:36.736 [2024-07-12 10:43:53.580806] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:36.736 [2024-07-12 10:43:53.580811] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:36.736 [2024-07-12 10:43:53.580817] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.736 10:43:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.736 [2024-07-12 10:43:53.641283] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
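The scheduler_create_thread sequence traced below boils down to a handful of plugin RPCs; the cpumasks (-m), active percentages (-a), and the thread ids 11 and 12 are the values from this run (only one pinned mask is shown per group):

    # pinned busy and idle threads per core, then unpinned threads;
    # ids returned by create feed the set-active and delete calls
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread 11 -> 50% active
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12          # thread 12 removed
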
00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.736 10:43:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.736 10:43:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.736 ************************************ 00:05:36.736 START TEST scheduler_create_thread 00:05:36.736 ************************************ 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.736 2 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.736 3 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.736 4 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.736 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.997 5 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.997 6 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.997 7 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.997 8 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.997 9 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.997 10:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.259 10 00:05:37.259 10:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.259 10:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:37.259 10:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.259 10:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.645 10:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.645 10:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:38.645 10:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:38.645 10:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.645 10:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.587 10:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.587 10:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:39.587 10:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.587 10:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.157 10:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.157 10:43:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:40.157 10:43:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:40.157 10:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.157 10:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.100 10:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.100 00:05:41.100 real 0m4.222s 00:05:41.100 user 0m0.024s 00:05:41.100 sys 0m0.005s 00:05:41.100 10:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.100 10:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.100 ************************************ 00:05:41.100 END TEST scheduler_create_thread 00:05:41.100 ************************************ 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:41.100 10:43:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:41.100 10:43:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1883810 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1883810 ']' 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1883810 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1883810 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1883810' 00:05:41.100 killing process with pid 1883810 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1883810 00:05:41.100 10:43:57 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1883810 00:05:41.361 [2024-07-12 10:43:58.279154] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:41.622 00:05:41.622 real 0m5.812s 00:05:41.622 user 0m13.522s 00:05:41.622 sys 0m0.399s 00:05:41.622 10:43:58 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.622 10:43:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.622 ************************************ 00:05:41.622 END TEST event_scheduler 00:05:41.622 ************************************ 00:05:41.622 10:43:58 event -- common/autotest_common.sh@1142 -- # return 0 00:05:41.622 10:43:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:41.622 10:43:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:41.622 10:43:58 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.622 10:43:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.622 10:43:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.622 ************************************ 00:05:41.622 START TEST app_repeat 00:05:41.622 ************************************ 00:05:41.622 10:43:58 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1884973 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1884973' 00:05:41.622 Process app_repeat pid: 1884973 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:41.622 spdk_app_start Round 0 00:05:41.622 10:43:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1884973 /var/tmp/spdk-nbd.sock 00:05:41.622 10:43:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1884973 ']' 00:05:41.622 10:43:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.622 10:43:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.622 10:43:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.622 10:43:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.622 10:43:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.622 [2024-07-12 10:43:58.570628] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:41.622 [2024-07-12 10:43:58.570712] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884973 ] 00:05:41.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.880 [2024-07-12 10:43:58.649619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.880 [2024-07-12 10:43:58.708213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.880 [2024-07-12 10:43:58.708225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.451 10:43:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.451 10:43:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:42.451 10:43:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.711 Malloc0 00:05:42.711 10:43:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.972 Malloc1 00:05:42.972 10:43:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.972 /dev/nbd0 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.972 10:43:59 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.972 1+0 records in 00:05:42.972 1+0 records out 00:05:42.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306123 s, 13.4 MB/s 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.972 10:43:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.972 10:43:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.233 /dev/nbd1 00:05:43.233 10:44:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.233 10:44:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.233 1+0 records in 00:05:43.233 1+0 records out 00:05:43.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270924 s, 15.1 MB/s 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:43.233 10:44:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:43.233 10:44:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.233 10:44:00 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.233 10:44:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.233 10:44:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.233 10:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.494 { 00:05:43.494 "nbd_device": "/dev/nbd0", 00:05:43.494 "bdev_name": "Malloc0" 00:05:43.494 }, 00:05:43.494 { 00:05:43.494 "nbd_device": "/dev/nbd1", 00:05:43.494 "bdev_name": "Malloc1" 00:05:43.494 } 00:05:43.494 ]' 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.494 { 00:05:43.494 "nbd_device": "/dev/nbd0", 00:05:43.494 "bdev_name": "Malloc0" 00:05:43.494 }, 00:05:43.494 { 00:05:43.494 "nbd_device": "/dev/nbd1", 00:05:43.494 "bdev_name": "Malloc1" 00:05:43.494 } 00:05:43.494 ]' 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.494 /dev/nbd1' 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.494 /dev/nbd1' 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.494 256+0 records in 00:05:43.494 256+0 records out 00:05:43.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123827 s, 84.7 MB/s 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.494 256+0 records in 00:05:43.494 256+0 records out 00:05:43.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115875 s, 90.5 MB/s 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.494 256+0 records in 00:05:43.494 256+0 records out 00:05:43.494 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.012569 s, 83.4 MB/s 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.494 10:44:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.495 10:44:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.495 10:44:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.755 10:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.755 10:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.755 10:44:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.755 10:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.755 10:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.755 10:44:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.755 10:44:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.755 10:44:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.755 10:44:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.755 10:44:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.016 10:44:00 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.016 10:44:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.016 10:44:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.276 10:44:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.276 [2024-07-12 10:44:01.250994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.536 [2024-07-12 10:44:01.303611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.536 [2024-07-12 10:44:01.303612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.536 [2024-07-12 10:44:01.332540] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.536 [2024-07-12 10:44:01.332569] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.833 10:44:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.833 10:44:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.833 spdk_app_start Round 1 00:05:47.833 10:44:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1884973 /var/tmp/spdk-nbd.sock 00:05:47.833 10:44:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1884973 ']' 00:05:47.833 10:44:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.833 10:44:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.833 10:44:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
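Each app_repeat round repeats the nbd read/write check just traced for Round 0. Stripped of the xtrace noise, the data-verify core of nbd_rpc_data_verify is roughly the following sketch; the real helper lives in nbd_common.sh under the bdev test directory and takes the bdev and nbd lists as arguments, and the temp-file path here is a shortened stand-in for the workspace path in the trace.

    tmp=/tmp/nbdrandtest                                        # stand-in path
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # push it through each nbd
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                              # byte-compare what the bdev stored
    done
    rm "$tmp"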
00:05:47.833 10:44:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.833 10:44:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.833 10:44:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.833 10:44:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:47.833 10:44:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.833 Malloc0 00:05:47.833 10:44:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.833 Malloc1 00:05:47.833 10:44:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.833 10:44:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.834 10:44:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:47.834 10:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.834 10:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.834 10:44:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.834 /dev/nbd0 00:05:47.834 10:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.834 10:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:47.834 1+0 records in 00:05:47.834 1+0 records out 00:05:47.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273741 s, 15.0 MB/s 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.834 10:44:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:47.834 10:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.834 10:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.834 10:44:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.099 /dev/nbd1 00:05:48.099 10:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.099 10:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.099 1+0 records in 00:05:48.099 1+0 records out 00:05:48.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294019 s, 13.9 MB/s 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.099 10:44:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.099 10:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.099 10:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.099 10:44:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.099 10:44:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.099 10:44:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:48.360 { 00:05:48.360 "nbd_device": "/dev/nbd0", 00:05:48.360 "bdev_name": "Malloc0" 00:05:48.360 }, 00:05:48.360 { 00:05:48.360 "nbd_device": "/dev/nbd1", 00:05:48.360 "bdev_name": "Malloc1" 00:05:48.360 } 00:05:48.360 ]' 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.360 { 00:05:48.360 "nbd_device": "/dev/nbd0", 00:05:48.360 "bdev_name": "Malloc0" 00:05:48.360 }, 00:05:48.360 { 00:05:48.360 "nbd_device": "/dev/nbd1", 00:05:48.360 "bdev_name": "Malloc1" 00:05:48.360 } 00:05:48.360 ]' 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.360 /dev/nbd1' 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.360 /dev/nbd1' 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.360 256+0 records in 00:05:48.360 256+0 records out 00:05:48.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118314 s, 88.6 MB/s 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.360 10:44:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.360 256+0 records in 00:05:48.361 256+0 records out 00:05:48.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117769 s, 89.0 MB/s 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.361 256+0 records in 00:05:48.361 256+0 records out 00:05:48.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125213 s, 83.7 MB/s 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.361 10:44:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.621 10:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.621 10:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.621 10:44:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.621 10:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.621 10:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.621 10:44:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.621 10:44:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.621 10:44:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.621 10:44:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.621 10:44:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.882 10:44:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.882 10:44:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.142 10:44:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.142 [2024-07-12 10:44:06.125540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.403 [2024-07-12 10:44:06.177159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.403 [2024-07-12 10:44:06.177188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.403 [2024-07-12 10:44:06.206754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.403 [2024-07-12 10:44:06.206786] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.701 10:44:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.701 10:44:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:52.701 spdk_app_start Round 2 00:05:52.701 10:44:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1884973 /var/tmp/spdk-nbd.sock 00:05:52.701 10:44:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1884973 ']' 00:05:52.701 10:44:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.701 10:44:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.701 10:44:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
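Around those transfers, every round also polls for the nbd device nodes instead of sleeping blindly: waitfornbd spins until the name shows up in /proc/partitions and then reads one block back, and waitfornbd_exit spins until it disappears again. A minimal reconstruction from the trace (the retry delay and the failure return value are assumptions, and the read-back dd/stat size check is omitted):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                # delay between retries is an assumption
        done
        ((i <= 20))                  # nonzero if the device never appeared
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        ((i <= 20))                  # nonzero if the device never went away
    }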
00:05:52.701 10:44:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.701 10:44:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.701 10:44:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.701 10:44:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:52.701 10:44:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.701 Malloc0 00:05:52.701 10:44:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.701 Malloc1 00:05:52.701 10:44:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.701 10:44:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.962 /dev/nbd0 00:05:52.962 10:44:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.962 10:44:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:52.962 1+0 records in 00:05:52.962 1+0 records out 00:05:52.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234508 s, 17.5 MB/s 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.962 10:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.962 10:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.962 10:44:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.962 /dev/nbd1 00:05:52.962 10:44:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.962 10:44:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.962 10:44:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.962 1+0 records in 00:05:52.962 1+0 records out 00:05:52.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272116 s, 15.1 MB/s 00:05:52.963 10:44:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.963 10:44:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.963 10:44:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.963 10:44:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.963 10:44:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.963 10:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.963 10:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.963 10:44:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.963 10:44:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.963 10:44:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:53.224 { 00:05:53.224 "nbd_device": "/dev/nbd0", 00:05:53.224 "bdev_name": "Malloc0" 00:05:53.224 }, 00:05:53.224 { 00:05:53.224 "nbd_device": "/dev/nbd1", 00:05:53.224 "bdev_name": "Malloc1" 00:05:53.224 } 00:05:53.224 ]' 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.224 { 00:05:53.224 "nbd_device": "/dev/nbd0", 00:05:53.224 "bdev_name": "Malloc0" 00:05:53.224 }, 00:05:53.224 { 00:05:53.224 "nbd_device": "/dev/nbd1", 00:05:53.224 "bdev_name": "Malloc1" 00:05:53.224 } 00:05:53.224 ]' 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.224 /dev/nbd1' 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.224 /dev/nbd1' 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.224 256+0 records in 00:05:53.224 256+0 records out 00:05:53.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124511 s, 84.2 MB/s 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.224 256+0 records in 00:05:53.224 256+0 records out 00:05:53.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119686 s, 87.6 MB/s 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.224 10:44:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.485 256+0 records in 00:05:53.485 256+0 records out 00:05:53.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125106 s, 83.8 MB/s 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.485 10:44:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.746 10:44:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.007 10:44:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.007 10:44:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.007 10:44:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.267 [2024-07-12 10:44:11.090794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.267 [2024-07-12 10:44:11.142256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.267 [2024-07-12 10:44:11.142257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.268 [2024-07-12 10:44:11.171089] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.268 [2024-07-12 10:44:11.171120] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.571 10:44:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1884973 /var/tmp/spdk-nbd.sock 00:05:57.571 10:44:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1884973 ']' 00:05:57.571 10:44:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.571 10:44:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.571 10:44:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
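The count=0 checks traced at the end of each round come from nbd_get_count, which asks the target which nbd devices it is still serving and counts them. Roughly, with rpc.py invoked by its repo-relative path, and with `|| true` mirroring the bare `true` in the trace, since grep -c exits nonzero on zero matches:

    nbd_get_count() {
        local rpc_server=$1 disks_json names
        disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)  # JSON array of {nbd_device, bdev_name}
        names=$(jq -r '.[] | .nbd_device' <<< "$disks_json")
        grep -c /dev/nbd <<< "$names" || true                        # prints 0 for an empty list
    }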
00:05:57.571 10:44:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.571 10:44:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:57.571 10:44:14 event.app_repeat -- event/event.sh@39 -- # killprocess 1884973 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1884973 ']' 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1884973 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1884973 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1884973' 00:05:57.571 killing process with pid 1884973 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1884973 00:05:57.571 10:44:14 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1884973 00:05:57.571 spdk_app_start is called in Round 0. 00:05:57.571 Shutdown signal received, stop current app iteration 00:05:57.571 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:57.571 spdk_app_start is called in Round 1. 00:05:57.571 Shutdown signal received, stop current app iteration 00:05:57.572 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:57.572 spdk_app_start is called in Round 2. 00:05:57.572 Shutdown signal received, stop current app iteration 00:05:57.572 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:57.572 spdk_app_start is called in Round 3. 
00:05:57.572 Shutdown signal received, stop current app iteration 00:05:57.572 10:44:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:57.572 10:44:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:57.572 00:05:57.572 real 0m15.777s 00:05:57.572 user 0m34.276s 00:05:57.572 sys 0m2.117s 00:05:57.572 10:44:14 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.572 10:44:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.572 ************************************ 00:05:57.572 END TEST app_repeat 00:05:57.572 ************************************ 00:05:57.572 10:44:14 event -- common/autotest_common.sh@1142 -- # return 0 00:05:57.572 10:44:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:57.572 10:44:14 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:57.572 10:44:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.572 10:44:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.572 10:44:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.572 ************************************ 00:05:57.572 START TEST cpu_locks 00:05:57.572 ************************************ 00:05:57.572 10:44:14 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:57.572 * Looking for test storage... 00:05:57.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:57.572 10:44:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:57.572 10:44:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:57.572 10:44:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:57.572 10:44:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:57.572 10:44:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.572 10:44:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.572 10:44:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.572 ************************************ 00:05:57.572 START TEST default_locks 00:05:57.572 ************************************ 00:05:57.572 10:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:57.572 10:44:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1888383 00:05:57.572 10:44:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1888383 00:05:57.572 10:44:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.572 10:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1888383 ']' 00:05:57.572 10:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.572 10:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.572 10:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.572 10:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.572 10:44:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.833 [2024-07-12 10:44:14.588925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:57.833 [2024-07-12 10:44:14.588987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888383 ] 00:05:57.833 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.833 [2024-07-12 10:44:14.670424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.833 [2024-07-12 10:44:14.740969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.406 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.406 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:58.406 10:44:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1888383 00:05:58.406 10:44:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1888383 00:05:58.406 10:44:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.978 lslocks: write error 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1888383 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1888383 ']' 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1888383 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1888383 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1888383' 00:05:58.978 killing process with pid 1888383 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1888383 00:05:58.978 10:44:15 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1888383 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1888383 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1888383 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1888383 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1888383 ']' 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1888383) - No such process 00:05:59.240 ERROR: process (pid: 1888383) is no longer running 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.240 00:05:59.240 real 0m1.546s 00:05:59.240 user 0m1.635s 00:05:59.240 sys 0m0.538s 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.240 10:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.240 ************************************ 00:05:59.240 END TEST default_locks 00:05:59.240 ************************************ 00:05:59.240 10:44:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.240 10:44:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:59.240 10:44:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.240 10:44:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.240 10:44:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.240 ************************************ 00:05:59.240 START TEST default_locks_via_rpc 00:05:59.240 ************************************ 00:05:59.240 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:59.240 10:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1888709 00:05:59.240 10:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1888709 00:05:59.240 10:44:16 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.240 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1888709 ']' 00:05:59.240 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.240 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.240 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.240 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.240 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.240 [2024-07-12 10:44:16.214022] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:59.240 [2024-07-12 10:44:16.214081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888709 ] 00:05:59.501 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.501 [2024-07-12 10:44:16.291856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.501 [2024-07-12 10:44:16.366612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.071 10:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.071 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.071 10:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1888709 00:06:00.071 10:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1888709 00:06:00.071 10:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
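locks_exist, expanded in the xtrace above, is just lslocks filtered for the spdk_cpu_lock files, and the two framework_*_cpumask_locks RPCs toggle whether the target holds them at all. A minimal reproduction against this run's socket (pid is whichever spdk_tgt instance is being inspected):

  locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # 0 iff the pid holds a CPU-core lock file
  }
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop the per-core locks
  "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim them
  locks_exist "$spdk_tgt_pid" && echo 'core locks held'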
00:06:00.704 10:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1888709 00:06:00.704 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1888709 ']' 00:06:00.704 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1888709 00:06:00.704 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:00.704 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.704 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1888709 00:06:00.704 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.704 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.704 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1888709' 00:06:00.704 killing process with pid 1888709 00:06:00.705 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1888709 00:06:00.705 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1888709 00:06:00.705 00:06:00.705 real 0m1.478s 00:06:00.705 user 0m1.561s 00:06:00.705 sys 0m0.522s 00:06:00.705 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.705 10:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.705 ************************************ 00:06:00.705 END TEST default_locks_via_rpc 00:06:00.705 ************************************ 00:06:00.705 10:44:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:00.705 10:44:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:00.705 10:44:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.705 10:44:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.705 10:44:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.986 ************************************ 00:06:00.986 START TEST non_locking_app_on_locked_coremask 00:06:00.986 ************************************ 00:06:00.986 10:44:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:00.986 10:44:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1889000 00:06:00.986 10:44:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1889000 /var/tmp/spdk.sock 00:06:00.986 10:44:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.986 10:44:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1889000 ']' 00:06:00.986 10:44:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.986 10:44:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.986 10:44:17 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.986 10:44:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.986 10:44:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.986 [2024-07-12 10:44:17.758581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:00.986 [2024-07-12 10:44:17.758642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889000 ] 00:06:00.986 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.986 [2024-07-12 10:44:17.837199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.986 [2024-07-12 10:44:17.897214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1889287 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1889287 /var/tmp/spdk2.sock 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1889287 ']' 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.557 10:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.818 [2024-07-12 10:44:18.574008] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:01.818 [2024-07-12 10:44:18.574058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889287 ] 00:06:01.818 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.818 [2024-07-12 10:44:18.643272] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
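The second target above reuses core mask 0x1 on purpose but passes --disable-cpumask-locks, so it never tries to claim core 0's lock file and can share the core with the first instance. The launch pair, with the long build path shortened to an assumed $SPDK_BIN:

  $SPDK_BIN/spdk_tgt -m 0x1 &                                       # claims /var/tmp/spdk_cpu_lock_000
  spdk_tgt_pid=$!
  $SPDK_BIN/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  spdk_tgt_pid2=$!                                                  # coexists: it opted out of locking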
00:06:01.818 [2024-07-12 10:44:18.643293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.818 [2024-07-12 10:44:18.749724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.390 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.390 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:02.390 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1889000 00:06:02.390 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1889000 00:06:02.390 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.962 lslocks: write error 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1889000 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1889000 ']' 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1889000 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889000 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889000' 00:06:02.962 killing process with pid 1889000 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1889000 00:06:02.962 10:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1889000 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1889287 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1889287 ']' 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1889287 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889287 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889287' 00:06:03.223 
killing process with pid 1889287 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1889287 00:06:03.223 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1889287 00:06:03.485 00:06:03.485 real 0m2.685s 00:06:03.485 user 0m2.947s 00:06:03.485 sys 0m0.805s 00:06:03.485 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.485 10:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.485 ************************************ 00:06:03.485 END TEST non_locking_app_on_locked_coremask 00:06:03.485 ************************************ 00:06:03.485 10:44:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:03.485 10:44:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:03.485 10:44:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.485 10:44:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.485 10:44:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.485 ************************************ 00:06:03.485 START TEST locking_app_on_unlocked_coremask 00:06:03.485 ************************************ 00:06:03.485 10:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:03.485 10:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1889666 00:06:03.485 10:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1889666 /var/tmp/spdk.sock 00:06:03.485 10:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:03.485 10:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1889666 ']' 00:06:03.485 10:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.485 10:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.485 10:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.485 10:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.485 10:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.745 [2024-07-12 10:44:20.518829] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
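killprocess, whose xtrace recurs throughout this section, probes the PID with signal 0 before doing anything, checks the command name so it never signals a sudo wrapper, then kills and reaps. A condensed sketch of that logic:

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # signal 0: is the process still alive?
    if [ "$(uname)" = Linux ]; then
      local name
      name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an spdk_tgt, as seen above
      [ "$name" = sudo ] && return 1            # refuse to signal a sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                     # reap if it is our child; ignore the noise otherwise
  }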
00:06:03.745 [2024-07-12 10:44:20.518885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889666 ] 00:06:03.745 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.745 [2024-07-12 10:44:20.592897] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:03.745 [2024-07-12 10:44:20.592921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.745 [2024-07-12 10:44:20.651543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1889821 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1889821 /var/tmp/spdk2.sock 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1889821 ']' 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.316 10:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.576 [2024-07-12 10:44:21.326631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
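waitforlisten, echoed at the top of every sub-test, polls until the target's UNIX-domain RPC socket is usable, bounded by the max_retries=100 visible in the xtrace. A hedged approximation (the real helper also issues an RPC probe; this sketch only watches the PID and the socket file):

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i != 0; i--)); do
      kill -0 "$pid" || return 1      # target died before it ever listened
      [ -S "$rpc_addr" ] && return 0  # socket file exists; assume it is accepting
      sleep 0.5
    done
    return 1                          # retries exhausted
  }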
00:06:04.576 [2024-07-12 10:44:21.326683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889821 ] 00:06:04.576 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.576 [2024-07-12 10:44:21.397119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.576 [2024-07-12 10:44:21.507384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.147 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.147 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:05.147 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1889821 00:06:05.147 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1889821 00:06:05.147 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.718 lslocks: write error 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1889666 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1889666 ']' 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1889666 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889666 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889666' 00:06:05.719 killing process with pid 1889666 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1889666 00:06:05.719 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1889666 00:06:05.979 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1889821 00:06:05.979 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1889821 ']' 00:06:05.979 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1889821 00:06:06.238 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:06.238 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.238 10:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889821 00:06:06.238 10:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:06.238 10:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.238 10:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889821' 00:06:06.238 killing process with pid 1889821 00:06:06.238 10:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1889821 00:06:06.238 10:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1889821 00:06:06.238 00:06:06.238 real 0m2.742s 00:06:06.238 user 0m2.993s 00:06:06.238 sys 0m0.831s 00:06:06.239 10:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.239 10:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.239 ************************************ 00:06:06.239 END TEST locking_app_on_unlocked_coremask 00:06:06.239 ************************************ 00:06:06.497 10:44:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:06.497 10:44:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:06.497 10:44:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.497 10:44:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.497 10:44:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.497 ************************************ 00:06:06.497 START TEST locking_app_on_locked_coremask 00:06:06.497 ************************************ 00:06:06.497 10:44:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:06.497 10:44:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1890349 00:06:06.497 10:44:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1890349 /var/tmp/spdk.sock 00:06:06.497 10:44:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.497 10:44:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1890349 ']' 00:06:06.497 10:44:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.497 10:44:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.497 10:44:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.497 10:44:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.497 10:44:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.497 [2024-07-12 10:44:23.337449] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
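The NOT helper that appears in the next sub-test (NOT waitforlisten ...) inverts a command's exit status so that an expected failure makes the test pass: es captures the real status and the final arithmetic asserts it was nonzero. Stripped of its argument-validation plumbing, it is roughly:

  NOT() {
    local es=0
    "$@" || es=$?    # run the command, remember how it exited
    (( es != 0 ))    # succeed only if the command failed
  }
  # e.g.: NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock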
00:06:06.497 [2024-07-12 10:44:23.337500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890349 ] 00:06:06.497 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.497 [2024-07-12 10:44:23.413634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.498 [2024-07-12 10:44:23.470320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1890383 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1890383 /var/tmp/spdk2.sock 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1890383 /var/tmp/spdk2.sock 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1890383 /var/tmp/spdk2.sock 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1890383 ']' 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.438 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.438 [2024-07-12 10:44:24.129072] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
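What the second target trips over below is an advisory lock on a per-core file: the first instance holds /var/tmp/spdk_cpu_lock_000 for core 0, so the overlapping claim fails with the "Cannot create lock on core 0" error. The same contention is easy to observe from a shell with flock(1) while a target holds the file:

  if flock -xn /var/tmp/spdk_cpu_lock_000 -c true; then   # non-blocking exclusive attempt
    echo 'core 0 lock is free'
  else
    echo 'core 0 already claimed by another process'
  fi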
00:06:07.438 [2024-07-12 10:44:24.129121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890383 ] 00:06:07.438 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.438 [2024-07-12 10:44:24.197053] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1890349 has claimed it. 00:06:07.438 [2024-07-12 10:44:24.197082] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:08.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1890383) - No such process 00:06:08.010 ERROR: process (pid: 1890383) is no longer running 00:06:08.010 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.010 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:08.010 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:08.010 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.010 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.010 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.010 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1890349 00:06:08.010 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1890349 00:06:08.010 10:44:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.581 lslocks: write error 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1890349 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1890349 ']' 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1890349 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1890349 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1890349' 00:06:08.581 killing process with pid 1890349 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1890349 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1890349 00:06:08.581 00:06:08.581 real 0m2.259s 00:06:08.581 user 0m2.494s 00:06:08.581 sys 0m0.631s 00:06:08.581 10:44:25 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.581 10:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.581 ************************************ 00:06:08.581 END TEST locking_app_on_locked_coremask 00:06:08.581 ************************************ 00:06:08.842 10:44:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:08.842 10:44:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:08.842 10:44:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.842 10:44:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.842 10:44:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.842 ************************************ 00:06:08.842 START TEST locking_overlapped_coremask 00:06:08.842 ************************************ 00:06:08.842 10:44:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:08.842 10:44:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1890745 00:06:08.842 10:44:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1890745 /var/tmp/spdk.sock 00:06:08.842 10:44:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:08.842 10:44:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1890745 ']' 00:06:08.842 10:44:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.842 10:44:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.842 10:44:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.842 10:44:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.842 10:44:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.842 [2024-07-12 10:44:25.679550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:08.842 [2024-07-12 10:44:25.679607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890745 ] 00:06:08.842 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.842 [2024-07-12 10:44:25.754341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.842 [2024-07-12 10:44:25.813662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.842 [2024-07-12 10:44:25.813815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.842 [2024-07-12 10:44:25.813816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1890931 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1890931 /var/tmp/spdk2.sock 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1890931 /var/tmp/spdk2.sock 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1890931 /var/tmp/spdk2.sock 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1890931 ']' 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.784 10:44:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.784 [2024-07-12 10:44:26.498939] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
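This overlapped-coremask case pairs -m 0x7 (cores 0-2) with -m 0x1c (cores 2-4); the collision is the bitwise AND of the two masks, which is why core 2 is the one named in the claim error that follows. Checking for overlap ahead of time is one line of shell arithmetic:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2 is contended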
00:06:09.784 [2024-07-12 10:44:26.498993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890931 ] 00:06:09.784 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.784 [2024-07-12 10:44:26.587304] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1890745 has claimed it. 00:06:09.784 [2024-07-12 10:44:26.587343] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1890931) - No such process 00:06:10.355 ERROR: process (pid: 1890931) is no longer running 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1890745 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1890745 ']' 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1890745 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1890745 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1890745' 00:06:10.355 killing process with pid 1890745 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1890745 00:06:10.355 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1890745 00:06:10.616 00:06:10.616 real 0m1.732s 00:06:10.616 user 0m4.900s 00:06:10.616 sys 0m0.381s 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.617 ************************************ 00:06:10.617 END TEST locking_overlapped_coremask 00:06:10.617 ************************************ 00:06:10.617 10:44:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.617 10:44:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:10.617 10:44:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.617 10:44:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.617 10:44:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.617 ************************************ 00:06:10.617 START TEST locking_overlapped_coremask_via_rpc 00:06:10.617 ************************************ 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1891122 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1891122 /var/tmp/spdk.sock 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1891122 ']' 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.617 10:44:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.617 [2024-07-12 10:44:27.475408] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:10.617 [2024-07-12 10:44:27.475456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891122 ] 00:06:10.617 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.617 [2024-07-12 10:44:27.552047] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
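check_remaining_locks, expanded in the xtrace of the previous sub-test, verifies that after the overlapping second instance was rejected the lock files on disk are exactly the three the surviving 0x7 target owns. The comparison is a plain glob-versus-brace-expansion equality:

  check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 of mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }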
00:06:10.617 [2024-07-12 10:44:27.552079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.878 [2024-07-12 10:44:27.609764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.878 [2024-07-12 10:44:27.609915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.878 [2024-07-12 10:44:27.609916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.448 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.448 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:11.448 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1891383 00:06:11.448 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1891383 /var/tmp/spdk2.sock 00:06:11.449 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1891383 ']' 00:06:11.449 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:11.449 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.449 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.449 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.449 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.449 10:44:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.449 [2024-07-12 10:44:28.298781] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:11.449 [2024-07-12 10:44:28.298835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891383 ] 00:06:11.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.449 [2024-07-12 10:44:28.387582] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
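Both targets come up with overlapping masks only because core locking is deferred at startup: the first target's reactors (cores 0-2, above) and the second's (cores 2-4, just below) share core 2. The overlap is plain bash arithmetic on the two -m values:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. core 2 is shared

Once either side enables locks over RPC, that shared core becomes contested.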
00:06:11.449 [2024-07-12 10:44:28.387612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.709 [2024-07-12 10:44:28.520379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.709 [2024-07-12 10:44:28.520537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.709 [2024-07-12 10:44:28.520539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.279 [2024-07-12 10:44:29.073202] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1891122 has claimed it. 
00:06:12.279 request: 00:06:12.279 { 00:06:12.279 "method": "framework_enable_cpumask_locks", 00:06:12.279 "req_id": 1 00:06:12.279 } 00:06:12.279 Got JSON-RPC error response 00:06:12.279 response: 00:06:12.279 { 00:06:12.279 "code": -32603, 00:06:12.279 "message": "Failed to claim CPU core: 2" 00:06:12.279 } 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1891122 /var/tmp/spdk.sock 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1891122 ']' 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.279 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.280 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1891383 /var/tmp/spdk2.sock 00:06:12.280 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1891383 ']' 00:06:12.280 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.280 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.280 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
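The request/response pair above is the expected failure path for this test. Reproduced by hand it would look roughly like the following (assuming the stock scripts/rpc.py client; the relative path is illustrative):

    ./scripts/rpc.py framework_enable_cpumask_locks            # first target claims cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected: JSON-RPC error -32603, "Failed to claim CPU core: 2"

The second call fails because pid 1891122 already holds the lock for core 2, exactly as claim_cpu_cores reported above.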
00:06:12.280 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.280 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.539 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.539 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.540 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:12.540 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.540 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.540 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.540 00:06:12.540 real 0m1.994s 00:06:12.540 user 0m0.786s 00:06:12.540 sys 0m0.135s 00:06:12.540 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.540 10:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.540 ************************************ 00:06:12.540 END TEST locking_overlapped_coremask_via_rpc 00:06:12.540 ************************************ 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:12.540 10:44:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:12.540 10:44:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1891122 ]] 00:06:12.540 10:44:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1891122 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1891122 ']' 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1891122 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1891122 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1891122' 00:06:12.540 killing process with pid 1891122 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1891122 00:06:12.540 10:44:29 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1891122 00:06:12.801 10:44:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1891383 ]] 00:06:12.801 10:44:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1891383 00:06:12.801 10:44:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1891383 ']' 00:06:12.801 10:44:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1891383 00:06:12.801 10:44:29 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:12.801 10:44:29 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.801 10:44:29 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1891383 00:06:12.801 10:44:29 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:12.801 10:44:29 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:12.801 10:44:29 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1891383' 00:06:12.801 killing process with pid 1891383 00:06:12.801 10:44:29 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1891383 00:06:12.801 10:44:29 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1891383 00:06:13.062 10:44:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:13.062 10:44:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:13.062 10:44:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1891122 ]] 00:06:13.062 10:44:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1891122 00:06:13.062 10:44:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1891122 ']' 00:06:13.062 10:44:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1891122 00:06:13.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1891122) - No such process 00:06:13.062 10:44:29 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1891122 is not found' 00:06:13.062 Process with pid 1891122 is not found 00:06:13.062 10:44:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1891383 ]] 00:06:13.062 10:44:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1891383 00:06:13.062 10:44:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1891383 ']' 00:06:13.062 10:44:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1891383 00:06:13.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1891383) - No such process 00:06:13.062 10:44:29 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1891383 is not found' 00:06:13.062 Process with pid 1891383 is not found 00:06:13.062 10:44:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:13.062 00:06:13.062 real 0m15.568s 00:06:13.062 user 0m26.670s 00:06:13.062 sys 0m4.721s 00:06:13.062 10:44:29 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.062 10:44:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.062 ************************************ 00:06:13.062 END TEST cpu_locks 00:06:13.062 ************************************ 00:06:13.062 10:44:29 event -- common/autotest_common.sh@1142 -- # return 0 00:06:13.062 00:06:13.062 real 0m41.314s 00:06:13.062 user 1m21.014s 00:06:13.062 sys 0m7.859s 00:06:13.062 10:44:29 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.062 10:44:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.062 ************************************ 00:06:13.062 END TEST event 00:06:13.062 ************************************ 00:06:13.062 10:44:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.062 10:44:30 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:13.062 10:44:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.062 10:44:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.062 
10:44:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.322 ************************************ 00:06:13.322 START TEST thread 00:06:13.322 ************************************ 00:06:13.322 10:44:30 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:13.322 * Looking for test storage... 00:06:13.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:13.322 10:44:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.323 10:44:30 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:13.323 10:44:30 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.323 10:44:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.323 ************************************ 00:06:13.323 START TEST thread_poller_perf 00:06:13.323 ************************************ 00:06:13.323 10:44:30 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.323 [2024-07-12 10:44:30.229841] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:13.323 [2024-07-12 10:44:30.229942] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891893 ] 00:06:13.323 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.583 [2024-07-12 10:44:30.312000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.583 [2024-07-12 10:44:30.374263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.583 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:14.522 ====================================== 00:06:14.522 busy:2407895666 (cyc) 00:06:14.522 total_run_count: 418000 00:06:14.522 tsc_hz: 2400000000 (cyc) 00:06:14.522 ====================================== 00:06:14.522 poller_cost: 5760 (cyc), 2400 (nsec) 00:06:14.522 00:06:14.522 real 0m1.216s 00:06:14.522 user 0m1.130s 00:06:14.522 sys 0m0.083s 00:06:14.522 10:44:31 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.522 10:44:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.522 ************************************ 00:06:14.522 END TEST thread_poller_perf 00:06:14.522 ************************************ 00:06:14.522 10:44:31 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:14.523 10:44:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.523 10:44:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:14.523 10:44:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.523 10:44:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.523 ************************************ 00:06:14.523 START TEST thread_poller_perf 00:06:14.523 ************************************ 00:06:14.523 10:44:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.782 [2024-07-12 10:44:31.522171] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:14.782 [2024-07-12 10:44:31.522275] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892107 ] 00:06:14.782 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.782 [2024-07-12 10:44:31.604350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.782 [2024-07-12 10:44:31.673927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.782 Running 1000 pollers for 1 seconds with 0 microseconds period. 
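poller_cost in the summary above is derived from the other printed numbers: cycles per poller invocation is busy / total_run_count, and the nanosecond figure follows from tsc_hz. Checking the first run (the 0-microsecond run just below reduces the same way to 432 cyc / 180 nsec):

    busy=2407895666 runs=418000 tsc_hz=2400000000
    cyc=$(( busy / runs ))                       # 5760
    nsec=$(( cyc * 1000000000 / tsc_hz ))        # 2400
    echo "poller_cost: $cyc (cyc), $nsec (nsec)"

Timed pollers (1 us period) cost roughly 13x more per invocation than untimed ones here, which is also why their total_run_count is an order of magnitude lower.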
00:06:16.166 ====================================== 00:06:16.166 busy:2401454816 (cyc) 00:06:16.166 total_run_count: 5557000 00:06:16.166 tsc_hz: 2400000000 (cyc) 00:06:16.166 ====================================== 00:06:16.166 poller_cost: 432 (cyc), 180 (nsec) 00:06:16.166 00:06:16.166 real 0m1.216s 00:06:16.166 user 0m1.127s 00:06:16.166 sys 0m0.085s 00:06:16.166 10:44:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.166 10:44:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.166 ************************************ 00:06:16.166 END TEST thread_poller_perf 00:06:16.166 ************************************ 00:06:16.166 10:44:32 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:16.166 10:44:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:16.166 00:06:16.166 real 0m2.684s 00:06:16.166 user 0m2.357s 00:06:16.166 sys 0m0.335s 00:06:16.166 10:44:32 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.166 10:44:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.166 ************************************ 00:06:16.166 END TEST thread 00:06:16.166 ************************************ 00:06:16.166 10:44:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.166 10:44:32 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:16.166 10:44:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.166 10:44:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.166 10:44:32 -- common/autotest_common.sh@10 -- # set +x 00:06:16.166 ************************************ 00:06:16.166 START TEST accel 00:06:16.166 ************************************ 00:06:16.166 10:44:32 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:16.166 * Looking for test storage... 00:06:16.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:16.166 10:44:32 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:16.166 10:44:32 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:16.166 10:44:32 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.166 10:44:32 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1892371 00:06:16.166 10:44:32 accel -- accel/accel.sh@63 -- # waitforlisten 1892371 00:06:16.166 10:44:32 accel -- common/autotest_common.sh@829 -- # '[' -z 1892371 ']' 00:06:16.166 10:44:32 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.166 10:44:32 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.166 10:44:32 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
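The accel suite that starts here boots spdk_tgt with its configuration on -c /dev/fd/63, visible in the xtrace just below: the JSON config arrives over a bash process-substitution descriptor rather than a file on disk. A minimal equivalent (the empty subsystem list is a placeholder, not the config this test actually generates):

    ./build/bin/spdk_tgt -c <(echo '{"subsystems": []}')    # <(...) appears to the child as /dev/fd/63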
00:06:16.166 10:44:32 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:16.166 10:44:32 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.166 10:44:32 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:16.166 10:44:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.166 10:44:32 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.166 10:44:32 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:44:32 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:44:32 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.166 10:44:32 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.166 10:44:32 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:16.166 10:44:32 accel -- accel/accel.sh@41 -- # jq -r . 00:06:16.166 [2024-07-12 10:44:32.996157] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:16.166 [2024-07-12 10:44:32.996230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892371 ] 00:06:16.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.166 [2024-07-12 10:44:33.077577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.166 [2024-07-12 10:44:33.147244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@862 -- # return 0 00:06:17.110 10:44:33 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:17.110 10:44:33 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:17.110 10:44:33 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:17.110 10:44:33 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:17.110 10:44:33 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:17.110 10:44:33 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:17.110 10:44:33 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 
10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.110 10:44:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.110 10:44:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.110 10:44:33 accel -- accel/accel.sh@75 -- # killprocess 1892371 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@948 -- # '[' -z 1892371 ']' 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@952 -- # kill -0 1892371 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@953 -- # uname 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1892371 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1892371' 00:06:17.110 killing process with pid 1892371 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@967 -- # kill 1892371 00:06:17.110 10:44:33 accel -- common/autotest_common.sh@972 -- # wait 1892371 00:06:17.110 10:44:34 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:17.110 10:44:34 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:17.110 10:44:34 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:17.110 10:44:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.110 10:44:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.380 10:44:34 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:17.380 10:44:34 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:17.380 10:44:34 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:17.380 10:44:34 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.380 10:44:34 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.380 10:44:34 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.380 10:44:34 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.380 10:44:34 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.380 10:44:34 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:17.380 10:44:34 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
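The long expected_opcs walk above is built from one RPC, accel_get_opc_assignments, flattened with the jq filter shown in the xtrace. Against a live target the same query looks like this (output illustrative; every opcode maps to "software" in this run because no hardware accel module is configured):

    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software
    # crc32c=software
    # ... one line per opcode

Each key=value pair is read back into the expected_opcs map that the later per-opcode tests consult.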
00:06:17.380 10:44:34 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.380 10:44:34 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:17.380 10:44:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.380 10:44:34 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:17.380 10:44:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:17.380 10:44:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.380 10:44:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.380 ************************************ 00:06:17.380 START TEST accel_missing_filename 00:06:17.380 ************************************ 00:06:17.380 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:17.380 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:17.380 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:17.380 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:17.380 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.380 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:17.380 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.380 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:17.380 10:44:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:17.380 10:44:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:17.380 10:44:34 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.380 10:44:34 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.380 10:44:34 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.380 10:44:34 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.380 10:44:34 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.380 10:44:34 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:17.380 10:44:34 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:17.380 [2024-07-12 10:44:34.242015] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:17.380 [2024-07-12 10:44:34.242079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892690 ] 00:06:17.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.380 [2024-07-12 10:44:34.318503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.645 [2024-07-12 10:44:34.385403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.645 [2024-07-12 10:44:34.415963] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.645 [2024-07-12 10:44:34.450323] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:17.645 A filename is required. 
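"A filename is required." is the point of accel_missing_filename: compress workloads need an input file via -l, so the bare invocation must exit non-zero. Sketched with the binary path used throughout this log (relative form):

    ./build/examples/accel_perf -t 1 -w compress                       # fails: A filename is required.
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib   # the valid form the next test uses

The NOT wrapper asserts the failure; the es narrowing just below (es=234 -> 106 -> 1) converts the observed exit code into the pass verdict.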
00:06:17.645 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:17.645 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.645 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:17.645 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:17.645 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:17.645 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.645 00:06:17.645 real 0m0.282s 00:06:17.645 user 0m0.208s 00:06:17.645 sys 0m0.114s 00:06:17.645 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.645 10:44:34 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:17.645 ************************************ 00:06:17.645 END TEST accel_missing_filename 00:06:17.645 ************************************ 00:06:17.645 10:44:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.645 10:44:34 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.645 10:44:34 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:17.645 10:44:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.645 10:44:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.645 ************************************ 00:06:17.645 START TEST accel_compress_verify 00:06:17.645 ************************************ 00:06:17.645 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.645 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:17.645 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.645 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:17.645 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.645 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:17.645 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.645 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.645 10:44:34 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.645 10:44:34 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:17.645 10:44:34 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.645 10:44:34 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.645 10:44:34 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.645 10:44:34 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.645 10:44:34 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.645 10:44:34 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:17.645 10:44:34 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:17.645 [2024-07-12 10:44:34.600598] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:17.645 [2024-07-12 10:44:34.600666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892735 ] 00:06:17.645 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.906 [2024-07-12 10:44:34.677091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.906 [2024-07-12 10:44:34.744109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.906 [2024-07-12 10:44:34.774870] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.906 [2024-07-12 10:44:34.809846] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:17.906 00:06:17.906 Compression does not support the verify option, aborting. 00:06:17.906 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:17.906 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.906 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:17.906 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:17.906 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:17.906 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.906 00:06:17.906 real 0m0.284s 00:06:17.906 user 0m0.199s 00:06:17.906 sys 0m0.125s 00:06:17.906 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.906 10:44:34 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:17.906 ************************************ 00:06:17.906 END TEST accel_compress_verify 00:06:17.906 ************************************ 00:06:17.906 10:44:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.906 10:44:34 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:18.168 10:44:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:18.168 10:44:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.168 10:44:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.168 ************************************ 00:06:18.168 START TEST accel_wrong_workload 00:06:18.168 ************************************ 00:06:18.168 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:18.168 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:18.168 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:18.168 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:18.168 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.168 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:18.168 10:44:34 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.168 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:18.168 10:44:34 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:18.168 10:44:34 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:18.168 10:44:34 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.168 10:44:34 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.168 10:44:34 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.168 10:44:34 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.168 10:44:34 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.168 10:44:34 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:18.168 10:44:34 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:18.168 Unsupported workload type: foobar 00:06:18.168 [2024-07-12 10:44:34.958008] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:18.168 accel_perf options: 00:06:18.168 [-h help message] 00:06:18.168 [-q queue depth per core] 00:06:18.168 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.168 [-T number of threads per core 00:06:18.168 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.168 [-t time in seconds] 00:06:18.168 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.168 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:18.168 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.169 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.169 [-S for crc32c workload, use this seed value (default 0) 00:06:18.169 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.169 [-f for fill workload, use this BYTE value (default 255) 00:06:18.169 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.169 [-y verify result if this switch is on] 00:06:18.169 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.169 Can be used to spread operations across a wider range of memory. 
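The options dump above is the expected outcome: foobar is not in the -w workload list, so accel_perf rejects it during argument parsing (app.c:1450) and exits non-zero, which is all the NOT wrapper needs:

    ./build/examples/accel_perf -t 1 -w foobar
    # Unsupported workload type: foobar   (usage text follows, exit status != 0)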
00:06:18.169 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:18.169 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.169 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.169 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.169 00:06:18.169 real 0m0.035s 00:06:18.169 user 0m0.020s 00:06:18.169 sys 0m0.014s 00:06:18.169 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.169 10:44:34 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:18.169 ************************************ 00:06:18.169 END TEST accel_wrong_workload 00:06:18.169 ************************************ 00:06:18.169 Error: writing output failed: Broken pipe 00:06:18.169 10:44:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.169 10:44:35 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.169 10:44:35 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:18.169 10:44:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.169 10:44:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.169 ************************************ 00:06:18.169 START TEST accel_negative_buffers 00:06:18.169 ************************************ 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:18.169 10:44:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:18.169 10:44:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:18.169 10:44:35 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.169 10:44:35 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.169 10:44:35 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.169 10:44:35 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.169 10:44:35 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.169 10:44:35 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:18.169 10:44:35 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:18.169 -x option must be non-negative. 
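accel_negative_buffers drives the same parse-error path through -x: per the usage text the xor source-buffer count must be non-negative (minimum 2), so -x -1 is rejected before the app starts:

    ./build/examples/accel_perf -t 1 -w xor -y -x -1    # -x option must be non-negative.
    ./build/examples/accel_perf -t 1 -w xor -y -x 2     # would be the smallest valid xor run

The "Error: writing output failed: Broken pipe" lines around these tests appear to be the usage text hitting an already-closed pipe, not a failure of the assertion itself.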
00:06:18.169 [2024-07-12 10:44:35.073096] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:18.169 accel_perf options: 00:06:18.169 [-h help message] 00:06:18.169 [-q queue depth per core] 00:06:18.169 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.169 [-T number of threads per core 00:06:18.169 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.169 [-t time in seconds] 00:06:18.169 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.169 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:18.169 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.169 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.169 [-S for crc32c workload, use this seed value (default 0) 00:06:18.169 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.169 [-f for fill workload, use this BYTE value (default 255) 00:06:18.169 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.169 [-y verify result if this switch is on] 00:06:18.169 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.169 Can be used to spread operations across a wider range of memory. 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.169 00:06:18.169 real 0m0.037s 00:06:18.169 user 0m0.022s 00:06:18.169 sys 0m0.014s 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.169 10:44:35 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:18.169 ************************************ 00:06:18.169 END TEST accel_negative_buffers 00:06:18.169 ************************************ 00:06:18.169 Error: writing output failed: Broken pipe 00:06:18.169 10:44:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.169 10:44:35 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:18.169 10:44:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:18.169 10:44:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.169 10:44:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.431 ************************************ 00:06:18.431 START TEST accel_crc32c 00:06:18.431 ************************************ 00:06:18.431 10:44:35 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:18.431 [2024-07-12 10:44:35.188576] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:18.431 [2024-07-12 10:44:35.188686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893081 ] 00:06:18.431 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.431 [2024-07-12 10:44:35.269142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.431 [2024-07-12 10:44:35.339293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.431 10:44:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:19.816 10:44:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.816 00:06:19.816 real 0m1.296s 00:06:19.816 user 0m1.191s 00:06:19.816 sys 0m0.118s 00:06:19.816 10:44:36 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.816 10:44:36 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:19.816 ************************************ 00:06:19.816 END TEST accel_crc32c 00:06:19.816 ************************************ 00:06:19.816 10:44:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.816 10:44:36 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:19.816 10:44:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:19.816 10:44:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.816 10:44:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.816 ************************************ 00:06:19.816 START TEST accel_crc32c_C2 00:06:19.816 ************************************ 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:19.816 10:44:36 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.816 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:19.816 [2024-07-12 10:44:36.560697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:19.817 [2024-07-12 10:44:36.560768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893263 ] 00:06:19.817 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.817 [2024-07-12 10:44:36.642026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.817 [2024-07-12 10:44:36.711814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:19.817 10:44:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.202 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.203 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:21.203 10:44:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.203 00:06:21.203 real 0m1.294s 00:06:21.203 user 0m1.192s 00:06:21.203 sys 0m0.115s 00:06:21.203 10:44:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.203 10:44:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:21.203 ************************************ 00:06:21.203 END TEST accel_crc32c_C2 00:06:21.203 ************************************ 00:06:21.203 10:44:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.203 10:44:37 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:21.203 10:44:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:21.203 10:44:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.203 10:44:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.203 ************************************ 00:06:21.203 START TEST accel_copy 00:06:21.203 ************************************ 00:06:21.203 10:44:37 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:21.203 10:44:37 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:21.203 [2024-07-12 10:44:37.932528] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:21.203 [2024-07-12 10:44:37.932626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893482 ] 00:06:21.203 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.203 [2024-07-12 10:44:38.032239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.203 [2024-07-12 10:44:38.102704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.203 10:44:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.587 
10:44:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:22.587 10:44:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.587 00:06:22.587 real 0m1.315s 00:06:22.587 user 0m1.198s 00:06:22.587 sys 0m0.127s 00:06:22.587 10:44:39 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.587 10:44:39 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:22.587 ************************************ 00:06:22.587 END TEST accel_copy 00:06:22.587 ************************************ 00:06:22.587 10:44:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.587 10:44:39 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.587 10:44:39 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:22.587 10:44:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.587 10:44:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.587 ************************************ 00:06:22.587 START TEST accel_fill 00:06:22.587 ************************************ 00:06:22.587 10:44:39 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.587 10:44:39 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:22.587 10:44:39 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:22.588 [2024-07-12 10:44:39.324163] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:22.588 [2024-07-12 10:44:39.324260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893837 ] 00:06:22.588 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.588 [2024-07-12 10:44:39.402466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.588 [2024-07-12 10:44:39.470246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
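Each workload in this trace is driven by an accel_perf command line like the one logged at the start of this fill test. A hedged way to reproduce such a run by hand is sketched below: the binary path and flags are taken verbatim from the log, while feeding an empty '{}' JSON config on fd 62 is an assumption standing in for what build_accel_config assembles.

#!/usr/bin/env bash
# Manual re-run of the fill workload from this trace. Path and flags come
# from the log above; the empty '{}' config on fd 62 is an assumption.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" \
    -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y \
    62<<< '{}'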
00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.588 10:44:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:23.971 10:44:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.971 00:06:23.971 real 0m1.292s 00:06:23.971 user 0m1.183s 00:06:23.971 sys 0m0.120s 00:06:23.971 10:44:40 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.971 10:44:40 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:23.971 ************************************ 00:06:23.971 END TEST accel_fill 00:06:23.971 ************************************ 00:06:23.971 10:44:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.971 10:44:40 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:23.971 10:44:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:23.971 10:44:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.971 10:44:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.971 ************************************ 00:06:23.971 START TEST accel_copy_crc32c 00:06:23.971 ************************************ 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:23.971 [2024-07-12 10:44:40.691838] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:23.971 [2024-07-12 10:44:40.691938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894184 ] 00:06:23.971 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.971 [2024-07-12 10:44:40.780450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.971 [2024-07-12 10:44:40.846276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:23.971 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.972 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.972 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.972 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.972 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.972 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.972 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.972 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.972 
10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.972 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.972 10:44:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.398 00:06:25.398 real 0m1.299s 00:06:25.398 user 0m1.190s 00:06:25.398 sys 0m0.122s 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.398 10:44:41 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:25.398 ************************************ 00:06:25.398 END TEST accel_copy_crc32c 00:06:25.398 ************************************ 00:06:25.398 10:44:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.398 10:44:41 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:25.398 10:44:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:25.398 10:44:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.398 10:44:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.398 ************************************ 00:06:25.398 START TEST accel_copy_crc32c_C2 00:06:25.398 ************************************ 00:06:25.398 10:44:42 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:25.398 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.398 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:25.399 [2024-07-12 10:44:42.067390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:25.399 [2024-07-12 10:44:42.067456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894539 ] 00:06:25.399 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.399 [2024-07-12 10:44:42.145716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.399 [2024-07-12 10:44:42.215456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
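The build_accel_config trace just above (accel_json_cfg=(), the "-gt 0" guards, "local IFS=,", "jq -r .") assembles an accel JSON config by joining any accumulated fragments with commas and pretty-printing the result through jq. A rough reconstruction of that step under those observations, with a hypothetical wrapper object and fragment contents:

#!/usr/bin/env bash
# Rough reconstruction of the comma-join + jq step traced above; the
# "config" wrapper and the two method fragments are hypothetical examples.
build_config() {
    local accel_json_cfg=("$@")
    local IFS=,
    echo "{\"config\":[${accel_json_cfg[*]}]}" | jq -r .
}
build_config '{"method":"example_a"}' '{"method":"example_b"}'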
00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.399 10:44:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
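The START TEST / END TEST banners and the real/user/sys triplets throughout this log come from a timed test wrapper: the trace shows run_test invoking accel_test under common/autotest_common.sh. A hedged approximation of that wrapper pattern is sketched below; it reproduces the banner-and-time shape visible in the log, not SPDK's actual implementation.

#!/usr/bin/env bash
# Hedged approximation of the run_test pattern seen in this trace:
# banner, time the test body, banner again. Not SPDK's actual code.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test demo_sleep sleep 1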
00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.782 00:06:26.782 real 0m1.293s 00:06:26.782 user 0m1.191s 00:06:26.782 sys 0m0.116s 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.782 10:44:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:26.782 ************************************ 00:06:26.782 END TEST accel_copy_crc32c_C2 00:06:26.782 ************************************ 00:06:26.782 10:44:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.782 10:44:43 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:26.782 10:44:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:26.782 10:44:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.782 10:44:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.782 ************************************ 00:06:26.782 START TEST accel_dualcast 00:06:26.782 ************************************ 00:06:26.782 10:44:43 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:26.782 10:44:43 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:26.782 10:44:43 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:26.782 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 10:44:43 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:26.782 10:44:43 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:26.782 10:44:43 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:26.783 [2024-07-12 10:44:43.436942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
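The recurring "EAL: No free 2048 kB hugepages reported on node 1" notices in this trace are DPDK's EAL probing the per-NUMA-node hugepage pools during the startup sequences logged here. The same counters can be read directly from standard Linux sysfs paths; the sketch below is generic and not specific to this CI host.

#!/usr/bin/env bash
# Inspect per-node 2 MiB hugepage pools via standard Linux sysfs paths;
# these are the counters behind the EAL notices above.
for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
    echo "$d: total=$(cat "$d/nr_hugepages") free=$(cat "$d/free_hugepages")"
done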
00:06:26.783 [2024-07-12 10:44:43.437040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894734 ] 00:06:26.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.783 [2024-07-12 10:44:43.516302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.783 [2024-07-12 10:44:43.585289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.783 10:44:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 10:44:44 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:27.725 10:44:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.725 00:06:27.725 real 0m1.293s 00:06:27.725 user 0m1.180s 00:06:27.725 sys 0m0.124s 00:06:27.725 10:44:44 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.725 10:44:44 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:27.725 ************************************ 00:06:27.725 END TEST accel_dualcast 00:06:27.725 ************************************ 00:06:27.986 10:44:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.986 10:44:44 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:27.986 10:44:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:27.986 10:44:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.986 10:44:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.986 ************************************ 00:06:27.986 START TEST accel_compare 00:06:27.986 ************************************ 00:06:27.986 10:44:44 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:27.986 10:44:44 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:27.986 [2024-07-12 10:44:44.799602] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
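The accel.sh@27 checks traced above are the pass criteria: accel_module and accel_opc (captured at accel.sh@22/@23 from the parsed report) must be non-empty, and the module must be the software fallback that was requested. A hedged reconstruction of those assertions:

    [[ -n "$accel_module" ]] &&            # a module was reported at all
    [[ -n "$accel_opc" ]] &&               # the opcode made it through the parser
    [[ "$accel_module" == software ]] ||   # and the software engine actually ran
      exit 1                               # variables come from the parse loop sketched earlier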
00:06:27.986 [2024-07-12 10:44:44.799661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894933 ] 00:06:27.986 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.986 [2024-07-12 10:44:44.879985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.986 [2024-07-12 10:44:44.952057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.247 10:44:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.189 
10:44:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:29.189 10:44:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.189 00:06:29.189 real 0m1.293s 00:06:29.189 user 0m1.187s 00:06:29.189 sys 0m0.116s 00:06:29.189 10:44:46 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.189 10:44:46 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:29.189 ************************************ 00:06:29.189 END TEST accel_compare 00:06:29.189 ************************************ 00:06:29.189 10:44:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.189 10:44:46 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:29.189 10:44:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.189 10:44:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.189 10:44:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.189 ************************************ 00:06:29.189 START TEST accel_xor 00:06:29.189 ************************************ 00:06:29.189 10:44:46 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:29.189 10:44:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:29.451 [2024-07-12 10:44:46.176090] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
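This first xor pass runs with the default source count; the val=2 read in its trace below is that default being applied. A sketch of the equivalent manual run, under the same build-tree assumptions as the dualcast sketch above:

    ./build/examples/accel_perf -t 1 -w xor -y   # xor of the default two source buffers into one destination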
00:06:29.451 [2024-07-12 10:44:46.176200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895271 ] 00:06:29.451 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.451 [2024-07-12 10:44:46.258608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.451 [2024-07-12 10:44:46.319228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 10:44:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:30.836 10:44:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.836 00:06:30.836 real 0m1.291s 00:06:30.836 user 0m1.184s 00:06:30.837 sys 0m0.118s 00:06:30.837 10:44:47 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.837 10:44:47 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:30.837 ************************************ 00:06:30.837 END TEST accel_xor 00:06:30.837 ************************************ 00:06:30.837 10:44:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.837 10:44:47 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:30.837 10:44:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:30.837 10:44:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.837 10:44:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.837 ************************************ 00:06:30.837 START TEST accel_xor 00:06:30.837 ************************************ 00:06:30.837 10:44:47 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:30.837 [2024-07-12 10:44:47.537720] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
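The second xor variant adds -x 3 (run_test accel_xor accel_test -t 1 -w xor -y -x 3 above), and the val=3 read in its trace below is that flag landing. A sketch of the manual equivalent:

    ./build/examples/accel_perf -t 1 -w xor -y -x 3   # -x raises the xor source count from 2 to 3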
00:06:30.837 [2024-07-12 10:44:47.537783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895629 ] 00:06:30.837 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.837 [2024-07-12 10:44:47.613064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.837 [2024-07-12 10:44:47.674600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.837 10:44:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:32.223 10:44:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.223 00:06:32.223 real 0m1.280s 00:06:32.223 user 0m1.181s 00:06:32.223 sys 0m0.109s 00:06:32.223 10:44:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.223 10:44:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:32.223 ************************************ 00:06:32.223 END TEST accel_xor 00:06:32.223 ************************************ 00:06:32.223 10:44:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.223 10:44:48 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:32.223 10:44:48 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:32.223 10:44:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.223 10:44:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.223 ************************************ 00:06:32.223 START TEST accel_dif_verify 00:06:32.223 ************************************ 00:06:32.223 10:44:48 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:32.223 10:44:48 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:32.223 [2024-07-12 10:44:48.894337] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
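The buffer geometry read in the dif_verify trace below — '4096 bytes' payloads, then '512 bytes' and '8 bytes' — implies eight protected blocks per 4 KiB buffer, assuming the 512-byte value is the DIF block size and the 8-byte value the per-block protection field. The arithmetic, spelled out:

    blocks=$((4096 / 512))     # 8 blocks in each 4 KiB payload
    pi_bytes=$((blocks * 8))   # 64 bytes of protection information overall
    echo "${blocks} blocks -> ${pi_bytes} DIF bytes"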
00:06:32.223 [2024-07-12 10:44:48.894431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895976 ] 00:06:32.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.223 [2024-07-12 10:44:48.972517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.223 [2024-07-12 10:44:49.043152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 10:44:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.610 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:33.611 10:44:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.611 00:06:33.611 real 0m1.296s 00:06:33.611 user 0m1.194s 00:06:33.611 sys 0m0.115s 00:06:33.611 10:44:50 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.611 10:44:50 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:33.611 ************************************ 00:06:33.611 END TEST accel_dif_verify 00:06:33.611 ************************************ 00:06:33.611 10:44:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.611 10:44:50 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:33.611 10:44:50 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:33.611 10:44:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.611 10:44:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.611 ************************************ 00:06:33.611 START TEST accel_dif_generate 00:06:33.611 ************************************ 00:06:33.611 10:44:50 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 
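Note that the DIF tests drop -y (run_test ... -t 1 -w dif_verify earlier and -t 1 -w dif_generate just above), which is why their traces read val=No where the copy-style tests read val=Yes. Each 'real 0m1.29x s' figure is consistent with a -t 1 (one-second) benchmark plus EAL start-up, presumably measured by bash's time around the test body. A sketch of the equivalent timed manual run:

    time ./build/examples/accel_perf -t 1 -w dif_generate   # no -y, matching val=No in the trace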
10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:33.611 [2024-07-12 10:44:50.263448] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:33.611 [2024-07-12 10:44:50.263528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896179 ] 00:06:33.611 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.611 [2024-07-12 10:44:50.343170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.611 [2024-07-12 10:44:50.414435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:33.611 10:44:50 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.611 10:44:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.583 10:44:51 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:34.583 10:44:51 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.583 00:06:34.583 real 0m1.295s 00:06:34.583 user 0m1.184s 00:06:34.583 sys 0m0.124s 00:06:34.583 10:44:51 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.583 10:44:51 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:34.583 ************************************ 00:06:34.583 END TEST accel_dif_generate 00:06:34.583 ************************************ 00:06:34.844 10:44:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.844 10:44:51 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:34.844 10:44:51 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:34.844 10:44:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.844 10:44:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.844 ************************************ 00:06:34.844 START TEST accel_dif_generate_copy 00:06:34.844 ************************************ 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:34.844 [2024-07-12 10:44:51.637886] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
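The accel_dif_generate case that finishes above reduces to the single accel_perf invocation echoed in its trace: -t 1 runs the workload for one second, -w selects it, and -c feeds accel_perf a JSON accel config over /dev/fd/62 that the harness builds on the fly (with no module entries here, since every module check in the trace evaluates false). A minimal standalone sketch, assuming only a built SPDK tree at $SPDK_DIR and assuming the config can be dropped when just the software path is exercised (the harness always passes one):

  #!/usr/bin/env bash
  # Sketch only: re-run the dif_generate workload outside the test harness.
  # The '4096 bytes', '512 bytes' and '8 bytes' values read back in the trace
  # are parameters the test case feeds through its read loop, not CLI flags.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate   # 1 s run, software module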
00:06:34.844 [2024-07-12 10:44:51.637955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896385 ] 00:06:34.844 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.844 [2024-07-12 10:44:51.720022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.844 [2024-07-12 10:44:51.790480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.844 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.105 10:44:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.049 00:06:36.049 real 0m1.298s 00:06:36.049 user 0m1.183s 00:06:36.049 sys 0m0.127s 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.049 10:44:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.049 ************************************ 00:06:36.049 END TEST accel_dif_generate_copy 00:06:36.049 ************************************ 00:06:36.049 10:44:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.049 10:44:52 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:36.049 10:44:52 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.049 10:44:52 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:36.049 10:44:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.049 10:44:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.049 ************************************ 00:06:36.049 START TEST accel_comp 00:06:36.049 ************************************ 00:06:36.049 10:44:52 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.049 10:44:52 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:36.049 10:44:52 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:36.049 [2024-07-12 10:44:53.010003] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:36.049 [2024-07-12 10:44:53.010082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896717 ] 00:06:36.309 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.309 [2024-07-12 10:44:53.088153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.309 [2024-07-12 10:44:53.146613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.309 10:44:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.692 10:44:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.692 10:44:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:37.693 10:44:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.693 00:06:37.693 real 0m1.280s 00:06:37.693 user 0m1.179s 00:06:37.693 sys 0m0.114s 00:06:37.693 10:44:54 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.693 10:44:54 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:37.693 ************************************ 00:06:37.693 END TEST accel_comp 00:06:37.693 ************************************ 00:06:37.693 10:44:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.693 10:44:54 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.693 10:44:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:37.693 10:44:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.693 10:44:54 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.693 ************************************ 00:06:37.693 START TEST accel_decomp 00:06:37.693 ************************************ 00:06:37.693 10:44:54 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:37.693 [2024-07-12 10:44:54.367829] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
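The compress/decompress pair exercised here drives the same binary against a real payload: -l points accel_perf at test/accel/bib in the SPDK tree, and the decompress run adds -y to verify the round-tripped data. A standalone equivalent of the two logged commands, under the same assumptions as the earlier sketch (the harness-generated /dev/fd/62 config is dropped):

  #!/usr/bin/env bash
  # Sketch: the accel_comp / accel_decomp commands exactly as traced above.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  BIB="$SPDK_DIR/test/accel/bib"    # sample payload shipped with the tree
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress   -l "$BIB"
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y   # -y: verify output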
00:06:37.693 [2024-07-12 10:44:54.367929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897069 ] 00:06:37.693 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.693 [2024-07-12 10:44:54.448819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.693 [2024-07-12 10:44:54.517610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 10:44:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.079 10:44:55 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.079 00:06:39.079 real 0m1.296s 00:06:39.079 user 0m1.182s 00:06:39.079 sys 0m0.126s 00:06:39.079 10:44:55 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.079 10:44:55 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:39.079 ************************************ 00:06:39.079 END TEST accel_decomp 00:06:39.079 ************************************ 00:06:39.079 10:44:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.079 10:44:55 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.079 10:44:55 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:39.079 10:44:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.079 10:44:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.079 ************************************ 00:06:39.079 START TEST accel_decomp_full 00:06:39.079 ************************************ 00:06:39.079 10:44:55 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.079 10:44:55 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:39.079 [2024-07-12 10:44:55.739057] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:39.079 [2024-07-12 10:44:55.739127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897417 ] 00:06:39.079 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.079 [2024-07-12 10:44:55.818163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.079 [2024-07-12 10:44:55.885027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.079 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.080 10:44:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.023 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.284 10:44:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.284 10:44:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.284 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.285 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.285 10:44:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.285 10:44:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.285 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.285 10:44:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.285 10:44:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.285 10:44:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.285 10:44:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.285 00:06:40.285 real 0m1.299s 00:06:40.285 user 0m1.190s 00:06:40.285 sys 0m0.121s 00:06:40.285 10:44:57 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.285 10:44:57 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:40.285 ************************************ 00:06:40.285 END TEST accel_decomp_full 00:06:40.285 ************************************ 00:06:40.285 10:44:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.285 10:44:57 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.285 10:44:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:40.285 10:44:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.285 10:44:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.285 ************************************ 00:06:40.285 START TEST accel_decomp_mcore 00:06:40.285 ************************************ 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:40.285 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:40.285 [2024-07-12 10:44:57.113992] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:40.285 [2024-07-12 10:44:57.114056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897653 ] 00:06:40.285 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.285 [2024-07-12 10:44:57.194383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.285 [2024-07-12 10:44:57.267703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.285 [2024-07-12 10:44:57.267855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.285 [2024-07-12 10:44:57.268011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.285 [2024-07-12 10:44:57.268012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.546 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:40.547 10:44:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.487 00:06:41.487 real 0m1.309s 00:06:41.487 user 0m4.419s 00:06:41.487 sys 0m0.125s 00:06:41.487 10:44:58 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.487 10:44:58 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:41.487 ************************************ 00:06:41.487 END TEST accel_decomp_mcore 00:06:41.487 ************************************ 00:06:41.487 10:44:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.487 10:44:58 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.487 10:44:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:41.487 10:44:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.487 10:44:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.748 ************************************ 00:06:41.748 START TEST accel_decomp_full_mcore 00:06:41.748 ************************************ 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:41.748 [2024-07-12 10:44:58.499487] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
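For reference, the accel_decomp_full_mcore case traced here reduces to a single accel_perf command. A minimal manual reproduction, assuming this job's workspace layout (the flag readings are inferred from the traced command line and config values, not from documentation: -t 1 runs the workload for one second, -w decompress selects the software decompress path, -l names the compressed bib input file, -y asks for result verification, and -m 0xf sets the core mask that produces the four reactors reported below):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf

The harness additionally feeds an accel JSON config on fd 62 via -c /dev/fd/62; build_accel_config above left accel_json_cfg=() empty, so no module overrides were applied in this run.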
00:06:41.748 [2024-07-12 10:44:58.499570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897847 ] 00:06:41.748 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.748 [2024-07-12 10:44:58.578512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.748 [2024-07-12 10:44:58.651479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.748 [2024-07-12 10:44:58.651635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.748 [2024-07-12 10:44:58.651789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.748 [2024-07-12 10:44:58.651790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.748 10:44:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.134 00:06:43.134 real 0m1.320s 00:06:43.134 user 0m4.471s 00:06:43.134 sys 0m0.126s 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.134 10:44:59 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:43.134 ************************************ 00:06:43.134 END TEST accel_decomp_full_mcore 00:06:43.134 ************************************ 00:06:43.134 10:44:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.134 10:44:59 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.134 10:44:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:43.134 10:44:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.134 10:44:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.134 ************************************ 00:06:43.134 START TEST accel_decomp_mthread 00:06:43.134 ************************************ 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:43.134 10:44:59 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:43.134 [2024-07-12 10:44:59.895902] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
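Two things worth noting at this boundary: the full_mcore pass just finished with user 0m4.471s against roughly 1.3s of wall time, consistent with the 0xf mask keeping four reactors busy for the one-second run, while the accel_decomp_mthread invocation that follows drops the core mask (the EAL line below shows -c 0x1 and a single reactor) and instead passes -T 2, which by its name and the val=2 in the traced config appears to request two worker threads on that one core. A comparable manual run, under the same assumptions as the earlier sketch, would be:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2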
00:06:43.134 [2024-07-12 10:44:59.895981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898164 ] 00:06:43.134 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.134 [2024-07-12 10:44:59.974386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.134 [2024-07-12 10:45:00.037884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.134 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.135 10:45:00 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.135 10:45:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.517 10:45:01 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.518 00:06:44.518 real 0m1.291s 00:06:44.518 user 0m1.184s 00:06:44.518 sys 0m0.118s 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.518 10:45:01 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:44.518 ************************************ 00:06:44.518 END TEST accel_decomp_mthread 00:06:44.518 ************************************ 00:06:44.518 10:45:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.518 10:45:01 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:44.518 10:45:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:44.518 10:45:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.518 10:45:01 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.518 ************************************ 00:06:44.518 START TEST accel_decomp_full_mthread 00:06:44.518 ************************************ 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:44.518 [2024-07-12 10:45:01.263896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
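The accel_decomp_full_mthread variant launched above differs from the previous run only by -o 0: the traced config below shows val='111250 bytes' where the plain mthread run showed val='4096 bytes', so -o looks like the per-operation transfer size, with 0 meaning the whole 111250-byte bib file is decompressed per operation (an inference from the two traces, not a documented flag description):

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2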
00:06:44.518 [2024-07-12 10:45:01.263964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898582 ] 00:06:44.518 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.518 [2024-07-12 10:45:01.343088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.518 [2024-07-12 10:45:01.405794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.518 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.519 10:45:01 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.519 10:45:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.902 00:06:45.902 real 0m1.310s 00:06:45.902 user 0m1.206s 00:06:45.902 sys 0m0.116s 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.902 10:45:02 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:45.902 ************************************ 00:06:45.902 END 
TEST accel_decomp_full_mthread
00:06:45.902 ************************************
00:06:45.902 10:45:02 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:45.902 10:45:02 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:06:45.902 10:45:02 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:45.902 10:45:02 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:45.902 10:45:02 accel -- accel/accel.sh@137 -- # build_accel_config
00:06:45.902 10:45:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:45.902 10:45:02 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:45.902 10:45:02 accel -- common/autotest_common.sh@10 -- # set +x
00:06:45.902 10:45:02 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:45.902 10:45:02 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:45.902 10:45:02 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:45.902 10:45:02 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:45.902 10:45:02 accel -- accel/accel.sh@40 -- # local IFS=,
00:06:45.902 10:45:02 accel -- accel/accel.sh@41 -- # jq -r .
00:06:45.902 ************************************
00:06:45.902 START TEST accel_dif_functional_tests
00:06:45.902 ************************************
00:06:45.902 10:45:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:45.902 [2024-07-12 10:45:02.671591] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:45.902 [2024-07-12 10:45:02.671646] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898976 ]
00:06:45.902 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.902 [2024-07-12 10:45:02.750848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:45.902 [2024-07-12 10:45:02.822896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:45.902 [2024-07-12 10:45:02.823052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.902 [2024-07-12 10:45:02.823052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:45.902
00:06:45.902
00:06:45.902 CUnit - A unit testing framework for C - Version 2.1-3
00:06:45.902 http://cunit.sourceforge.net/
00:06:45.902
00:06:45.902
00:06:45.902 Suite: accel_dif
00:06:45.902 Test: verify: DIF generated, GUARD check ...passed
00:06:45.902 Test: verify: DIF generated, APPTAG check ...passed
00:06:45.902 Test: verify: DIF generated, REFTAG check ...passed
00:06:45.902 Test: verify: DIF not generated, GUARD check ...[2024-07-12 10:45:02.876757] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:45.903 passed
00:06:45.903 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 10:45:02.876799] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:45.903 passed
00:06:45.903 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 10:45:02.876816] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:45.903 passed
00:06:45.903 Test: verify: APPTAG correct, APPTAG check ...passed
00:06:45.903 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 10:45:02.876858] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:06:45.903 passed
00:06:45.903 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:06:45.903 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:06:45.903 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:06:45.903 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 10:45:02.876953] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:06:45.903 passed
00:06:45.903 Test: verify copy: DIF generated, GUARD check ...passed
00:06:45.903 Test: verify copy: DIF generated, APPTAG check ...passed
00:06:45.903 Test: verify copy: DIF generated, REFTAG check ...passed
00:06:45.903 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 10:45:02.877056] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:45.903 passed
00:06:45.903 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 10:45:02.877076] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:45.903 passed
00:06:45.903 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 10:45:02.877095] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:45.903 passed
00:06:45.903 Test: generate copy: DIF generated, GUARD check ...passed
00:06:45.903 Test: generate copy: DIF generated, APPTAG check ...passed
00:06:45.903 Test: generate copy: DIF generated, REFTAG check ...passed
00:06:45.903 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:06:45.903 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:06:45.903 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:06:45.903 Test: generate copy: iovecs-len validate ...[2024-07-12 10:45:02.877262] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:45.903 passed 00:06:45.903 Test: generate copy: buffer alignment validate ...passed 00:06:45.903 00:06:45.903 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.903 suites 1 1 n/a 0 0 00:06:45.903 tests 26 26 26 0 0 00:06:45.903 asserts 115 115 115 0 n/a 00:06:45.903 00:06:45.903 Elapsed time = 0.002 seconds 00:06:46.163 00:06:46.163 real 0m0.352s 00:06:46.163 user 0m0.458s 00:06:46.163 sys 0m0.143s 00:06:46.163 10:45:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.163 10:45:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:46.163 ************************************ 00:06:46.163 END TEST accel_dif_functional_tests 00:06:46.163 ************************************ 00:06:46.163 10:45:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.163 00:06:46.163 real 0m30.181s 00:06:46.163 user 0m33.331s 00:06:46.163 sys 0m4.567s 00:06:46.163 10:45:03 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.163 10:45:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.163 ************************************ 00:06:46.163 END TEST accel 00:06:46.163 ************************************ 00:06:46.163 10:45:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:46.163 10:45:03 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:46.163 10:45:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.163 10:45:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.163 10:45:03 -- common/autotest_common.sh@10 -- # set +x 00:06:46.163 ************************************ 00:06:46.163 START TEST accel_rpc 00:06:46.163 ************************************ 00:06:46.163 10:45:03 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:46.423 * Looking for test storage... 00:06:46.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:46.423 10:45:03 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:46.423 10:45:03 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1899043 00:06:46.423 10:45:03 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1899043 00:06:46.423 10:45:03 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:46.423 10:45:03 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1899043 ']' 00:06:46.423 10:45:03 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.423 10:45:03 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.423 10:45:03 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.423 10:45:03 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.423 10:45:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.423 [2024-07-12 10:45:03.251022] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
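A reading note on the accel_dif suite above: every dif.c *ERROR* line is expected output, logged by the negative-path cases ("DIF not generated", "incorrect" tag values) immediately before their "passed" verdict; the Run Summary above confirms 26/26 tests passed with 0 failures. The Expected/Actual pairs refer to the three fields of the 8-byte T10 protection-information tuple carried with each block, which, assuming the standard DIF layout (not spelled out in this log), is:

  [ data block | 2B guard (CRC16) | 2B application tag | 4B reference tag ]

So "Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867" is the injected 0x5a5a test pattern disagreeing with the freshly computed CRC16 guard, which is exactly the failure these cases are designed to provoke.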
00:06:46.423 [2024-07-12 10:45:03.251096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899043 ] 00:06:46.423 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.423 [2024-07-12 10:45:03.329477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.423 [2024-07-12 10:45:03.391887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.364 10:45:04 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.364 10:45:04 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:47.364 10:45:04 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:47.364 10:45:04 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:47.364 10:45:04 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:47.364 10:45:04 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:47.364 10:45:04 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:47.364 10:45:04 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.364 10:45:04 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.364 10:45:04 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.365 ************************************ 00:06:47.365 START TEST accel_assign_opcode 00:06:47.365 ************************************ 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:47.365 [2024-07-12 10:45:04.049735] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:47.365 [2024-07-12 10:45:04.061757] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.365 software 00:06:47.365 00:06:47.365 real 0m0.196s 00:06:47.365 user 0m0.044s 00:06:47.365 sys 0m0.014s 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.365 10:45:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:47.365 ************************************ 00:06:47.365 END TEST accel_assign_opcode 00:06:47.365 ************************************ 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:47.365 10:45:04 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1899043 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1899043 ']' 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1899043 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1899043 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1899043' 00:06:47.365 killing process with pid 1899043 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@967 -- # kill 1899043 00:06:47.365 10:45:04 accel_rpc -- common/autotest_common.sh@972 -- # wait 1899043 00:06:47.625 00:06:47.625 real 0m1.419s 00:06:47.625 user 0m1.490s 00:06:47.625 sys 0m0.416s 00:06:47.625 10:45:04 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.625 10:45:04 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.625 ************************************ 00:06:47.625 END TEST accel_rpc 00:06:47.625 ************************************ 00:06:47.625 10:45:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.625 10:45:04 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:47.625 10:45:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.625 10:45:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.625 10:45:04 -- common/autotest_common.sh@10 -- # set +x 00:06:47.625 ************************************ 00:06:47.625 START TEST app_cmdline 00:06:47.625 ************************************ 00:06:47.625 10:45:04 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:47.886 * Looking for test storage... 
00:06:47.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:47.886 10:45:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:47.886 10:45:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1899451 00:06:47.886 10:45:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1899451 00:06:47.886 10:45:04 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:47.886 10:45:04 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1899451 ']' 00:06:47.886 10:45:04 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.886 10:45:04 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.886 10:45:04 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.886 10:45:04 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.886 10:45:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.886 [2024-07-12 10:45:04.748551] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:47.886 [2024-07-12 10:45:04.748619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899451 ] 00:06:47.886 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.886 [2024-07-12 10:45:04.830175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.146 [2024-07-12 10:45:04.893901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.717 10:45:05 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.717 10:45:05 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:48.717 10:45:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:48.717 { 00:06:48.717 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:06:48.717 "fields": { 00:06:48.717 "major": 24, 00:06:48.717 "minor": 9, 00:06:48.717 "patch": 0, 00:06:48.717 "suffix": "-pre", 00:06:48.717 "commit": "719d03c6a" 00:06:48.717 } 00:06:48.717 } 00:06:48.717 10:45:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:48.717 10:45:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:48.717 10:45:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:48.717 10:45:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:48.717 10:45:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:48.717 10:45:05 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.717 10:45:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.717 10:45:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:48.717 10:45:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:48.977 10:45:05 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.977 10:45:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:48.977 10:45:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:48.977 10:45:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.977 10:45:05 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:48.977 10:45:05 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.977 10:45:05 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.977 10:45:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.977 10:45:05 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.978 request: 00:06:48.978 { 00:06:48.978 "method": "env_dpdk_get_mem_stats", 00:06:48.978 "req_id": 1 00:06:48.978 } 00:06:48.978 Got JSON-RPC error response 00:06:48.978 response: 00:06:48.978 { 00:06:48.978 "code": -32601, 00:06:48.978 "message": "Method not found" 00:06:48.978 } 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.978 10:45:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1899451 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1899451 ']' 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1899451 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1899451 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1899451' 00:06:48.978 killing process with pid 1899451 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@967 -- # kill 1899451 00:06:48.978 10:45:05 app_cmdline -- common/autotest_common.sh@972 -- # wait 1899451 00:06:49.238 00:06:49.238 real 0m1.552s 00:06:49.238 user 0m1.876s 00:06:49.238 sys 0m0.414s 00:06:49.238 10:45:06 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
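The app_cmdline run above is the --rpcs-allowed check: spdk_tgt was started permitting only spdk_get_version and rpc_get_methods, both allowed calls returned normally, and env_dpdk_get_mem_stats came back as JSON-RPC error -32601 ("Method not found"). A minimal sketch of the same check done by hand, using only the commands visible in the trace (run from the spdk checkout; the workspace prefix is elided here):

    # Serve only the two allowlisted methods.
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    # Allowed methods answer normally.
    ./scripts/rpc.py spdk_get_version
    ./scripts/rpc.py rpc_get_methods

    # Anything else is rejected with code -32601, "Method not found".
    ./scripts/rpc.py env_dpdk_get_mem_stats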
00:06:49.238 10:45:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.238 ************************************ 00:06:49.238 END TEST app_cmdline 00:06:49.238 ************************************ 00:06:49.238 10:45:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:49.238 10:45:06 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:49.238 10:45:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.238 10:45:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.238 10:45:06 -- common/autotest_common.sh@10 -- # set +x 00:06:49.238 ************************************ 00:06:49.238 START TEST version 00:06:49.238 ************************************ 00:06:49.238 10:45:06 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:49.504 * Looking for test storage... 00:06:49.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:49.504 10:45:06 version -- app/version.sh@17 -- # get_header_version major 00:06:49.504 10:45:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.504 10:45:06 version -- app/version.sh@14 -- # cut -f2 00:06:49.504 10:45:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.504 10:45:06 version -- app/version.sh@17 -- # major=24 00:06:49.504 10:45:06 version -- app/version.sh@18 -- # get_header_version minor 00:06:49.504 10:45:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.504 10:45:06 version -- app/version.sh@14 -- # cut -f2 00:06:49.504 10:45:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.504 10:45:06 version -- app/version.sh@18 -- # minor=9 00:06:49.504 10:45:06 version -- app/version.sh@19 -- # get_header_version patch 00:06:49.504 10:45:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.504 10:45:06 version -- app/version.sh@14 -- # cut -f2 00:06:49.504 10:45:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.504 10:45:06 version -- app/version.sh@19 -- # patch=0 00:06:49.504 10:45:06 version -- app/version.sh@20 -- # get_header_version suffix 00:06:49.504 10:45:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.504 10:45:06 version -- app/version.sh@14 -- # cut -f2 00:06:49.504 10:45:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.504 10:45:06 version -- app/version.sh@20 -- # suffix=-pre 00:06:49.504 10:45:06 version -- app/version.sh@22 -- # version=24.9 00:06:49.504 10:45:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:49.504 10:45:06 version -- app/version.sh@28 -- # version=24.9rc0 00:06:49.504 10:45:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:49.504 10:45:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:49.504 10:45:06 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:49.504 10:45:06 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:49.504 00:06:49.504 real 0m0.171s 00:06:49.504 user 0m0.086s 00:06:49.504 sys 0m0.122s 00:06:49.504 10:45:06 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.504 10:45:06 version -- common/autotest_common.sh@10 -- # set +x 00:06:49.504 ************************************ 00:06:49.504 END TEST version 00:06:49.504 ************************************ 00:06:49.504 10:45:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:49.504 10:45:06 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:49.504 10:45:06 -- spdk/autotest.sh@198 -- # uname -s 00:06:49.504 10:45:06 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:49.504 10:45:06 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:49.504 10:45:06 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:49.504 10:45:06 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:49.504 10:45:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:49.504 10:45:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:49.504 10:45:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:49.504 10:45:06 -- common/autotest_common.sh@10 -- # set +x 00:06:49.504 10:45:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:49.504 10:45:06 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:49.504 10:45:06 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:49.504 10:45:06 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:49.504 10:45:06 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:49.504 10:45:06 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:49.504 10:45:06 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.504 10:45:06 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:49.504 10:45:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.504 10:45:06 -- common/autotest_common.sh@10 -- # set +x 00:06:49.835 ************************************ 00:06:49.835 START TEST nvmf_tcp 00:06:49.835 ************************************ 00:06:49.835 10:45:06 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.835 * Looking for test storage... 00:06:49.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.835 10:45:06 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.836 10:45:06 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.836 10:45:06 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.836 10:45:06 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.836 10:45:06 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.836 10:45:06 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.836 10:45:06 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.836 10:45:06 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:49.836 10:45:06 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:49.836 10:45:06 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.836 10:45:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:49.836 10:45:06 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:49.836 10:45:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:49.836 10:45:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.836 10:45:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.836 ************************************ 00:06:49.836 START TEST nvmf_example 00:06:49.836 ************************************ 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:49.836 * Looking for test storage... 
00:06:49.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.836 10:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.110 10:45:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:50.110 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:50.110 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.110 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:50.110 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:50.110 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:50.110 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.110 10:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.110 10:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.111 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:50.111 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:50.111 10:45:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:50.111 10:45:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:58.266 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:58.266 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:58.266 Found net devices under 
0000:4b:00.0: cvl_0_0 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:58.266 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:58.266 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:58.267 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.267 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.267 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.267 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.267 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:58.267 10:45:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:58.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:06:58.267 00:06:58.267 --- 10.0.0.2 ping statistics --- 00:06:58.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.267 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:06:58.267 00:06:58.267 --- 10.0.0.1 ping statistics --- 00:06:58.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.267 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1904119 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1904119 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1904119 ']' 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
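nvmftestinit above has just finished wiring the physical e810 pair for the TCP tests: one port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with a ping. Condensed from the trace (the cvl_* names are what this host enumerated; substitute your own interfaces):

    # Isolate the target-side port in its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends and bring the links (and namespace loopback) up.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP on the default port, then check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1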
00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.267 10:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:58.267 10:45:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:58.267 EAL: No free 2048 kB hugepages reported on node 1 
00:07:10.500 Initializing NVMe Controllers 00:07:10.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:10.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:10.500 Initialization complete. Launching workers. 00:07:10.500 ======================================================== 00:07:10.500 Latency(us) 00:07:10.500 Device Information : IOPS MiB/s Average min max 00:07:10.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18782.70 73.37 3408.03 608.20 15481.90 00:07:10.500 ======================================================== 00:07:10.500 Total : 18782.70 73.37 3408.03 608.20 15481.90 00:07:10.500 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:10.500 rmmod nvme_tcp 00:07:10.500 rmmod nvme_fabrics 00:07:10.500 rmmod nvme_keyring 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1904119 ']' 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1904119 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1904119 ']' 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1904119 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1904119 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1904119' 00:07:10.500 killing process with pid 1904119 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1904119 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1904119 00:07:10.500 nvmf threads initialize successfully 00:07:10.500 bdev subsystem init successfully 00:07:10.500 created a nvmf target service 00:07:10.500 create targets's poll groups done 00:07:10.500 all subsystems of target started 00:07:10.500 nvmf target is running 00:07:10.500 all subsystems of target stopped 00:07:10.500 destroy targets's poll groups done 00:07:10.500 destroyed the nvmf target service 00:07:10.500 bdev subsystem finish successfully 00:07:10.500 nvmf threads destroy successfully 00:07:10.500 10:45:25 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.500 10:45:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.759 10:45:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:10.759 10:45:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:10.759 10:45:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.759 10:45:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.759 00:07:10.759 real 0m21.011s 00:07:10.759 user 0m44.039s 00:07:10.759 sys 0m7.584s 00:07:10.759 10:45:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.759 10:45:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.759 ************************************ 00:07:10.759 END TEST nvmf_example 00:07:10.759 ************************************ 00:07:10.759 10:45:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:10.759 10:45:27 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:10.759 10:45:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:10.759 10:45:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.759 10:45:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.023 ************************************ 00:07:11.023 START TEST nvmf_filesystem 00:07:11.023 ************************************ 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:11.023 * Looking for test storage... 
00:07:11.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:11.023 10:45:27 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:11.023 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:11.024 #define SPDK_CONFIG_H 00:07:11.024 #define SPDK_CONFIG_APPS 1 00:07:11.024 #define SPDK_CONFIG_ARCH native 00:07:11.024 #undef SPDK_CONFIG_ASAN 00:07:11.024 #undef SPDK_CONFIG_AVAHI 00:07:11.024 #undef SPDK_CONFIG_CET 00:07:11.024 #define SPDK_CONFIG_COVERAGE 1 00:07:11.024 #define SPDK_CONFIG_CROSS_PREFIX 00:07:11.024 #undef SPDK_CONFIG_CRYPTO 00:07:11.024 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:11.024 #undef SPDK_CONFIG_CUSTOMOCF 00:07:11.024 #undef SPDK_CONFIG_DAOS 00:07:11.024 #define SPDK_CONFIG_DAOS_DIR 00:07:11.024 #define SPDK_CONFIG_DEBUG 1 00:07:11.024 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:11.024 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:11.024 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:11.024 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:11.024 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:11.024 #undef SPDK_CONFIG_DPDK_UADK 00:07:11.024 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:11.024 #define SPDK_CONFIG_EXAMPLES 1 00:07:11.024 #undef SPDK_CONFIG_FC 00:07:11.024 #define SPDK_CONFIG_FC_PATH 00:07:11.024 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:11.024 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:11.024 #undef SPDK_CONFIG_FUSE 00:07:11.024 #undef SPDK_CONFIG_FUZZER 00:07:11.024 #define SPDK_CONFIG_FUZZER_LIB 00:07:11.024 #undef SPDK_CONFIG_GOLANG 00:07:11.024 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:11.024 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:11.024 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:11.024 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:11.024 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:11.024 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:11.024 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:11.024 #define SPDK_CONFIG_IDXD 1 00:07:11.024 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:11.024 #undef SPDK_CONFIG_IPSEC_MB 00:07:11.024 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:11.024 #define SPDK_CONFIG_ISAL 1 00:07:11.024 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:11.024 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:11.024 #define SPDK_CONFIG_LIBDIR 00:07:11.024 #undef SPDK_CONFIG_LTO 00:07:11.024 #define SPDK_CONFIG_MAX_LCORES 128 00:07:11.024 #define SPDK_CONFIG_NVME_CUSE 1 00:07:11.024 #undef SPDK_CONFIG_OCF 00:07:11.024 #define SPDK_CONFIG_OCF_PATH 00:07:11.024 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:11.024 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:11.024 #define SPDK_CONFIG_PGO_DIR 00:07:11.024 #undef SPDK_CONFIG_PGO_USE 00:07:11.024 #define SPDK_CONFIG_PREFIX /usr/local 00:07:11.024 #undef SPDK_CONFIG_RAID5F 00:07:11.024 #undef SPDK_CONFIG_RBD 00:07:11.024 #define SPDK_CONFIG_RDMA 1 00:07:11.024 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:11.024 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:11.024 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:11.024 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:11.024 #define SPDK_CONFIG_SHARED 1 00:07:11.024 #undef SPDK_CONFIG_SMA 00:07:11.024 #define SPDK_CONFIG_TESTS 1 00:07:11.024 #undef SPDK_CONFIG_TSAN 00:07:11.024 #define SPDK_CONFIG_UBLK 1 00:07:11.024 #define SPDK_CONFIG_UBSAN 1 00:07:11.024 #undef SPDK_CONFIG_UNIT_TESTS 00:07:11.024 #undef SPDK_CONFIG_URING 00:07:11.024 #define SPDK_CONFIG_URING_PATH 00:07:11.024 #undef SPDK_CONFIG_URING_ZNS 00:07:11.024 #undef SPDK_CONFIG_USDT 00:07:11.024 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:11.024 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:11.024 #define SPDK_CONFIG_VFIO_USER 1 00:07:11.024 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:11.024 #define SPDK_CONFIG_VHOST 1 00:07:11.024 #define SPDK_CONFIG_VIRTIO 1 00:07:11.024 #undef SPDK_CONFIG_VTUNE 00:07:11.024 #define SPDK_CONFIG_VTUNE_DIR 00:07:11.024 #define SPDK_CONFIG_WERROR 1 00:07:11.024 #define SPDK_CONFIG_WPDK_DIR 00:07:11.024 #undef SPDK_CONFIG_XNVME 00:07:11.024 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:11.024 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:11.025 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:11.026 10:45:27 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
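
[Editor's note] The applications.sh trace above (applications.sh@22-24) decides whether SPDK was built with debug support by substring-matching the generated include/spdk/config.h against '#define SPDK_CONFIG_DEBUG'. A minimal bash sketch of that pattern; the helper name is illustrative, not SPDK's actual code:

    # Hypothetical helper: report whether a generated config header
    # defines a given flag, using the same glob match as the trace.
    config_has() {
        local header=$1 flag=$2
        [[ -e $header ]] || return 1            # no header, no build
        [[ $(<"$header") == *"#define $flag"* ]]
    }

    if config_has spdk/include/spdk/config.h SPDK_CONFIG_DEBUG; then
        echo "debug build detected"
    fi
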
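[Editor's note] Each source of paths/export.sh above (paths/export.sh@2-6) re-prepends the Go, protoc and golangci directories, so PATH visibly accumulates duplicate entries over the run. A small dedup sketch, assuming GNU bash; this is an editor illustration, not part of the autotest scripts:

    # Drop duplicate PATH entries while keeping first-seen order.
    dedup_path() {
        local seen= dir
        local IFS=:
        for dir in $PATH; do
            [[ ":$seen:" == *":$dir:"* ]] && continue   # already kept
            seen=${seen:+$seen:}$dir
        done
        printf '%s\n' "$seen"
    }

    PATH=$(dedup_path)
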
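[Editor's note] autotest_common.sh@193-238 above exports the sanitizer knobs and regenerates the LSAN suppression file each run (its single entry, leak:libfuse3.so, is echoed just before this point). A sketch of that setup using the exact option strings from the trace; the file handling is condensed:

    # Rebuild the suppression file so stale entries cannot carry over.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' > "$supp"        # known fuse3 leak to ignore
    export LSAN_OPTIONS="suppressions=$supp"
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
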
00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1906912 ]] 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1906912 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.enX4Kd 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.enX4Kd/tests/target /tmp/spdk.enX4Kd 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:11.026 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:11.027 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:11.027 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:11.027 10:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118681346048 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371013120 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10689667072 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680796160 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:11.027 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864503296 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:11.288 10:45:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684474368 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1032192 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:11.288 * Looking for test storage... 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118681346048 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12904259584 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:11.288 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.289 10:45:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.431 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.431 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:19.431 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:19.431 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:19.432 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:19.432 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:19.432 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:19.432 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:19.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:07:19.432 00:07:19.432 --- 10.0.0.2 ping statistics --- 00:07:19.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.432 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:19.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.458 ms 00:07:19.432 00:07:19.432 --- 10.0.0.1 ping statistics --- 00:07:19.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.432 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.432 ************************************ 00:07:19.432 START TEST nvmf_filesystem_no_in_capsule 00:07:19.432 ************************************ 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:19.432 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:19.433 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.433 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1910773 00:07:19.433 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1910773 00:07:19.433 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.433 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1910773 ']' 00:07:19.433 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.433 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.433 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.433 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.433 10:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.433 [2024-07-12 10:45:35.483795] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:19.433 [2024-07-12 10:45:35.483853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.433 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.433 [2024-07-12 10:45:35.573983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.433 [2024-07-12 10:45:35.672775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.433 [2024-07-12 10:45:35.672836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.433 [2024-07-12 10:45:35.672844] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.433 [2024-07-12 10:45:35.672851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.433 [2024-07-12 10:45:35.672857] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.433 [2024-07-12 10:45:35.673027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.433 [2024-07-12 10:45:35.673192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.433 [2024-07-12 10:45:35.673262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.433 [2024-07-12 10:45:35.673263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.433 [2024-07-12 10:45:36.336416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
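The namespace plumbing that nvmf_tcp_init performed above (nvmf/common.sh@229-268) boils down to standard iproute2/iptables calls. A minimal sketch using the values from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2); the real helper also covers RDMA and multi-target layouts:

# Flush any stale addresses, then move one E810 port into a private
# namespace so target and initiator talk over a real link.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator stays in the default namespace on 10.0.0.1.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# Target lives inside the namespace on 10.0.0.2.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and prove both directions are reachable.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1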
00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.433 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.695 Malloc1 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.695 [2024-07-12 10:45:36.490517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:19.695 { 00:07:19.695 "name": "Malloc1", 00:07:19.695 "aliases": [ 00:07:19.695 "dc489506-b1e8-48df-bbeb-bbe6e137feae" 00:07:19.695 ], 00:07:19.695 "product_name": "Malloc disk", 00:07:19.695 "block_size": 512, 00:07:19.695 "num_blocks": 1048576, 00:07:19.695 "uuid": "dc489506-b1e8-48df-bbeb-bbe6e137feae", 00:07:19.695 "assigned_rate_limits": { 00:07:19.695 "rw_ios_per_sec": 0, 00:07:19.695 "rw_mbytes_per_sec": 0, 00:07:19.695 "r_mbytes_per_sec": 0, 00:07:19.695 "w_mbytes_per_sec": 0 00:07:19.695 }, 00:07:19.695 "claimed": true, 00:07:19.695 "claim_type": "exclusive_write", 00:07:19.695 "zoned": false, 00:07:19.695 "supported_io_types": { 00:07:19.695 "read": true, 00:07:19.695 "write": true, 00:07:19.695 "unmap": true, 00:07:19.695 "flush": true, 00:07:19.695 "reset": true, 00:07:19.695 "nvme_admin": false, 00:07:19.695 "nvme_io": false, 00:07:19.695 "nvme_io_md": false, 00:07:19.695 "write_zeroes": true, 00:07:19.695 "zcopy": true, 00:07:19.695 "get_zone_info": false, 00:07:19.695 "zone_management": false, 00:07:19.695 "zone_append": false, 00:07:19.695 "compare": false, 00:07:19.695 "compare_and_write": false, 00:07:19.695 "abort": true, 00:07:19.695 "seek_hole": false, 00:07:19.695 "seek_data": false, 00:07:19.695 "copy": true, 00:07:19.695 "nvme_iov_md": false 00:07:19.695 }, 00:07:19.695 "memory_domains": [ 00:07:19.695 { 00:07:19.695 "dma_device_id": "system", 00:07:19.695 "dma_device_type": 1 00:07:19.695 }, 00:07:19.695 { 00:07:19.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.695 "dma_device_type": 2 00:07:19.695 } 00:07:19.695 ], 00:07:19.695 "driver_specific": {} 00:07:19.695 } 00:07:19.695 ]' 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:19.695 10:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.611 10:45:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.611 10:45:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:21.611 10:45:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:07:21.611 10:45:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:21.611 10:45:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:23.524 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:23.785 10:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.727 
************************************ 00:07:24.727 START TEST filesystem_ext4 00:07:24.727 ************************************ 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:24.727 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:24.727 mke2fs 1.46.5 (30-Dec-2021) 00:07:24.727 Discarding device blocks: 0/522240 done 00:07:24.727 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:24.727 Filesystem UUID: d5400d16-3f9f-40e9-b0c7-590c565e98d9 00:07:24.727 Superblock backups stored on blocks: 00:07:24.727 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:24.727 00:07:24.727 Allocating group tables: 0/64 done 00:07:24.727 Writing inode tables: 0/64 done 00:07:24.988 Creating journal (8192 blocks): done 00:07:24.988 Writing superblocks and filesystem accounting information: 0/64 done 00:07:24.988 00:07:24.988 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:24.988 10:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.248 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.248 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:25.248 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.248 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:25.248 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.249 10:45:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1910773 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.249 00:07:25.249 real 0m0.584s 00:07:25.249 user 0m0.022s 00:07:25.249 sys 0m0.074s 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:25.249 ************************************ 00:07:25.249 END TEST filesystem_ext4 00:07:25.249 ************************************ 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.249 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.509 ************************************ 00:07:25.509 START TEST filesystem_btrfs 00:07:25.509 ************************************ 00:07:25.509 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:25.509 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:25.509 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.509 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:25.509 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:25.509 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:25.509 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:25.509 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:25.509 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:25.509 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:25.509 
10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:25.770 btrfs-progs v6.6.2 00:07:25.770 See https://btrfs.readthedocs.io for more information. 00:07:25.770 00:07:25.770 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:25.770 NOTE: several default settings have changed in version 5.15, please make sure 00:07:25.770 this does not affect your deployments: 00:07:25.770 - DUP for metadata (-m dup) 00:07:25.770 - enabled no-holes (-O no-holes) 00:07:25.770 - enabled free-space-tree (-R free-space-tree) 00:07:25.770 00:07:25.770 Label: (null) 00:07:25.770 UUID: 1eb86f76-8d55-426d-ab78-35ee46e88950 00:07:25.770 Node size: 16384 00:07:25.770 Sector size: 4096 00:07:25.770 Filesystem size: 510.00MiB 00:07:25.770 Block group profiles: 00:07:25.770 Data: single 8.00MiB 00:07:25.770 Metadata: DUP 32.00MiB 00:07:25.770 System: DUP 8.00MiB 00:07:25.770 SSD detected: yes 00:07:25.770 Zoned device: no 00:07:25.770 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:25.770 Runtime features: free-space-tree 00:07:25.770 Checksum: crc32c 00:07:25.770 Number of devices: 1 00:07:25.770 Devices: 00:07:25.770 ID SIZE PATH 00:07:25.770 1 510.00MiB /dev/nvme0n1p1 00:07:25.770 00:07:25.770 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:25.770 10:45:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:26.031 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1910773 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:26.290 00:07:26.290 real 0m0.824s 00:07:26.290 user 0m0.035s 00:07:26.290 sys 0m0.123s 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 
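Between mkfs and the END TEST banner, every filesystem variant runs the same smoke test (target/filesystem.sh@23-43): write a file over NVMe/TCP, remove it, unmount, and confirm both the target process and the block devices survived. Roughly, with nvmfpid holding the target's PID:

mount /dev/nvme0n1p1 /mnt/device   # mount the freshly created filesystem
touch /mnt/device/aaa              # one file written over NVMe/TCP
sync
rm /mnt/device/aaa                 # and removed again
sync
umount /mnt/device

kill -0 "$nvmfpid"                        # target process must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # test partition still visible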
00:07:26.290 ************************************ 00:07:26.290 END TEST filesystem_btrfs 00:07:26.290 ************************************ 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.290 ************************************ 00:07:26.290 START TEST filesystem_xfs 00:07:26.290 ************************************ 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:26.290 10:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:26.290 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:26.290 = sectsz=512 attr=2, projid32bit=1 00:07:26.290 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:26.290 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:26.290 data = bsize=4096 blocks=130560, imaxpct=25 00:07:26.290 = sunit=0 swidth=0 blks 00:07:26.290 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:26.290 log =internal log bsize=4096 blocks=16384, version=2 00:07:26.290 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:26.290 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:27.232 Discarding blocks...Done. 
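The make_filesystem helper visible in the xtrace (common/autotest_common.sh@924-935) mainly selects the right force flag: mkfs.ext4 spells it -F, while btrfs and xfs use -f. A simplified sketch that omits the retry counter i carried by the real helper:

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F     # mkfs.ext4 uses uppercase -F to force
    else
        force=-f     # mkfs.btrfs / mkfs.xfs use lowercase -f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}

# e.g. make_filesystem xfs /dev/nvme0n1p1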
00:07:27.232 10:45:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:27.232 10:45:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.145 10:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.145 10:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:29.145 10:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.145 10:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:29.145 10:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:29.145 10:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.145 10:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1910773 00:07:29.145 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.145 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.145 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.145 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.145 00:07:29.145 real 0m2.859s 00:07:29.145 user 0m0.028s 00:07:29.145 sys 0m0.073s 00:07:29.145 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.145 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:29.145 ************************************ 00:07:29.145 END TEST filesystem_xfs 00:07:29.145 ************************************ 00:07:29.145 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:29.145 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:29.145 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:29.145 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.405 10:45:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1910773 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1910773 ']' 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1910773 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1910773 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1910773' 00:07:29.405 killing process with pid 1910773 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1910773 00:07:29.405 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1910773 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:29.665 00:07:29.665 real 0m11.092s 00:07:29.665 user 0m43.496s 00:07:29.665 sys 0m1.233s 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.665 ************************************ 00:07:29.665 END TEST nvmf_filesystem_no_in_capsule 00:07:29.665 ************************************ 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.665 ************************************ 00:07:29.665 START TEST nvmf_filesystem_in_capsule 00:07:29.665 ************************************ 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1913024 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1913024 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1913024 ']' 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.665 10:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.926 [2024-07-12 10:45:46.654027] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:29.926 [2024-07-12 10:45:46.654077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.926 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.926 [2024-07-12 10:45:46.736982] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.926 [2024-07-12 10:45:46.794075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.926 [2024-07-12 10:45:46.794107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
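nvmfappstart then launches a fresh target inside the namespace and waits for its RPC socket, exactly as in the first suite. The essential steps are sketched below; the harness's waitforlisten is more careful than this illustrative polling loop:

# Instance 0, all tracepoint groups (-e 0xFFFF), 4-core mask (-m 0xF):
# the four "Reactor started" notices above are those cores coming up.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Block until the app answers on its UNIX domain RPC socket.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done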
00:07:29.926 [2024-07-12 10:45:46.794113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.926 [2024-07-12 10:45:46.794117] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.926 [2024-07-12 10:45:46.794121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.926 [2024-07-12 10:45:46.794265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.926 [2024-07-12 10:45:46.794502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.926 [2024-07-12 10:45:46.794655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.926 [2024-07-12 10:45:46.794656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.498 [2024-07-12 10:45:47.474555] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.498 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.759 Malloc1 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.759 10:45:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.759 [2024-07-12 10:45:47.596711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.759 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:30.759 { 00:07:30.759 "name": "Malloc1", 00:07:30.759 "aliases": [ 00:07:30.759 "6be315ad-f067-4eb7-b110-3829efcd0e75" 00:07:30.759 ], 00:07:30.759 "product_name": "Malloc disk", 00:07:30.759 "block_size": 512, 00:07:30.759 "num_blocks": 1048576, 00:07:30.759 "uuid": "6be315ad-f067-4eb7-b110-3829efcd0e75", 00:07:30.759 "assigned_rate_limits": { 00:07:30.759 "rw_ios_per_sec": 0, 00:07:30.759 "rw_mbytes_per_sec": 0, 00:07:30.759 "r_mbytes_per_sec": 0, 00:07:30.759 "w_mbytes_per_sec": 0 00:07:30.759 }, 00:07:30.759 "claimed": true, 00:07:30.759 "claim_type": "exclusive_write", 00:07:30.759 "zoned": false, 00:07:30.759 "supported_io_types": { 00:07:30.759 "read": true, 00:07:30.759 "write": true, 00:07:30.759 "unmap": true, 00:07:30.759 "flush": true, 00:07:30.759 "reset": true, 00:07:30.759 "nvme_admin": false, 00:07:30.759 "nvme_io": false, 00:07:30.759 "nvme_io_md": false, 00:07:30.759 "write_zeroes": true, 00:07:30.759 "zcopy": true, 00:07:30.759 "get_zone_info": false, 00:07:30.759 "zone_management": false, 00:07:30.759 
"zone_append": false, 00:07:30.759 "compare": false, 00:07:30.759 "compare_and_write": false, 00:07:30.759 "abort": true, 00:07:30.760 "seek_hole": false, 00:07:30.760 "seek_data": false, 00:07:30.760 "copy": true, 00:07:30.760 "nvme_iov_md": false 00:07:30.760 }, 00:07:30.760 "memory_domains": [ 00:07:30.760 { 00:07:30.760 "dma_device_id": "system", 00:07:30.760 "dma_device_type": 1 00:07:30.760 }, 00:07:30.760 { 00:07:30.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.760 "dma_device_type": 2 00:07:30.760 } 00:07:30.760 ], 00:07:30.760 "driver_specific": {} 00:07:30.760 } 00:07:30.760 ]' 00:07:30.760 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:30.760 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:30.760 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:30.760 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:30.760 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:30.760 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:30.760 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:30.760 10:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.737 10:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.737 10:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:32.737 10:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.737 10:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:32.737 10:45:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:34.670 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:34.670 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:34.670 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.670 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:34.670 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.670 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:34.670 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:34.670 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:34.670 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:34.670 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:34.671 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:34.671 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:34.671 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:34.671 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:34.671 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:34.671 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:34.671 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:34.932 10:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:35.504 10:45:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.444 ************************************ 00:07:36.444 START TEST filesystem_in_capsule_ext4 00:07:36.444 ************************************ 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:36.444 10:45:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:36.444 10:45:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:36.444 mke2fs 1.46.5 (30-Dec-2021) 00:07:36.444 Discarding device blocks: 0/522240 done 00:07:36.444 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:36.444 Filesystem UUID: 854fe497-8a19-42f2-af9c-2806c6b2129f 00:07:36.444 Superblock backups stored on blocks: 00:07:36.444 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:36.444 00:07:36.444 Allocating group tables: 0/64 done 00:07:36.444 Writing inode tables: 0/64 done 00:07:38.985 Creating journal (8192 blocks): done 00:07:39.816 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:39.816 00:07:39.816 10:45:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:39.816 10:45:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1913024 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:40.757 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:40.757 00:07:40.757 real 0m4.437s 00:07:40.757 user 0m0.029s 00:07:40.757 sys 0m0.074s 00:07:40.758 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.758 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:40.758 ************************************ 00:07:40.758 END TEST filesystem_in_capsule_ext4 00:07:40.758 ************************************ 
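Each filesystem subtest above (ext4 here, btrfs and xfs below) runs the same mount-and-touch smoke test after mkfs. A condensed sketch of that sequence, with the device path and mount point taken from this log and the function wrapper added only for readability:

# Sketch of the per-filesystem smoke test traced above (ext4 branch;
# /dev/nvme0n1p1 and /mnt/device match this run).
fs_smoke_test() {
    local dev=/dev/nvme0n1p1 mnt=/mnt/device
    mkfs.ext4 -F "$dev"        # ext4 takes uppercase -F to force
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
    touch "$mnt/aaa"           # create a file on the new filesystem...
    sync
    rm "$mnt/aaa"              # ...then remove it again
    sync
    umount "$mnt"              # must unmount cleanly
    # device and partition should still be listed after unmount
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1
}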
00:07:40.758 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:40.758 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:40.758 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:40.758 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.758 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.018 ************************************ 00:07:41.018 START TEST filesystem_in_capsule_btrfs 00:07:41.018 ************************************ 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:41.018 btrfs-progs v6.6.2 00:07:41.018 See https://btrfs.readthedocs.io for more information. 00:07:41.018 00:07:41.018 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:41.018 NOTE: several default settings have changed in version 5.15, please make sure 00:07:41.018 this does not affect your deployments: 00:07:41.018 - DUP for metadata (-m dup) 00:07:41.018 - enabled no-holes (-O no-holes) 00:07:41.018 - enabled free-space-tree (-R free-space-tree) 00:07:41.018 00:07:41.018 Label: (null) 00:07:41.018 UUID: 293ede3c-9e8c-4c9f-8dac-06eca322a00b 00:07:41.018 Node size: 16384 00:07:41.018 Sector size: 4096 00:07:41.018 Filesystem size: 510.00MiB 00:07:41.018 Block group profiles: 00:07:41.018 Data: single 8.00MiB 00:07:41.018 Metadata: DUP 32.00MiB 00:07:41.018 System: DUP 8.00MiB 00:07:41.018 SSD detected: yes 00:07:41.018 Zoned device: no 00:07:41.018 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:41.018 Runtime features: free-space-tree 00:07:41.018 Checksum: crc32c 00:07:41.018 Number of devices: 1 00:07:41.018 Devices: 00:07:41.018 ID SIZE PATH 00:07:41.018 1 510.00MiB /dev/nvme0n1p1 00:07:41.018 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:41.018 10:45:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.403 10:45:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.403 10:45:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:42.403 10:45:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.403 10:45:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:42.403 10:45:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:42.403 10:45:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1913024 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.403 00:07:42.403 real 0m1.268s 00:07:42.403 user 0m0.028s 00:07:42.403 sys 0m0.134s 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:42.403 ************************************ 00:07:42.403 END TEST filesystem_in_capsule_btrfs 00:07:42.403 ************************************ 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.403 ************************************ 00:07:42.403 START TEST filesystem_in_capsule_xfs 00:07:42.403 ************************************ 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:42.403 10:45:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:42.403 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:42.403 = sectsz=512 attr=2, projid32bit=1 00:07:42.403 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:42.403 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:42.403 data = bsize=4096 blocks=130560, imaxpct=25 00:07:42.403 = sunit=0 swidth=0 blks 00:07:42.403 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:42.403 log =internal log bsize=4096 blocks=16384, version=2 00:07:42.403 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:42.403 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:43.347 Discarding blocks...Done. 
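As the xtrace entries above show, the make_filesystem helper selects -F for ext4 and lowercase -f for the other filesystems before invoking mkfs.$fstype. A minimal sketch of that dispatch; the real helper also has a retry counter (the local i=0 in the trace), which is omitted here:

# Sketch of the force-flag selection visible in the trace above.
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F       # mke2fs forces with uppercase -F
    else
        force=-f       # mkfs.btrfs and mkfs.xfs force with lowercase -f
    fi
    "mkfs.$fstype" $force "$dev_name"
}

# e.g. make_filesystem xfs /dev/nvme0n1p1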
00:07:43.347 10:46:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:43.347 10:46:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1913024 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:45.891 00:07:45.891 real 0m3.601s 00:07:45.891 user 0m0.023s 00:07:45.891 sys 0m0.081s 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:45.891 ************************************ 00:07:45.891 END TEST filesystem_in_capsule_xfs 00:07:45.891 ************************************ 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:45.891 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:46.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.152 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:46.152 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:46.152 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:46.152 10:46:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.152 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:46.152 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.152 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:46.152 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.152 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.152 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1913024 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1913024 ']' 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1913024 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1913024 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1913024' 00:07:46.152 killing process with pid 1913024 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1913024 00:07:46.152 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1913024 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:46.413 00:07:46.413 real 0m16.675s 00:07:46.413 user 1m5.937s 00:07:46.413 sys 0m1.241s 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.413 ************************************ 00:07:46.413 END TEST nvmf_filesystem_in_capsule 00:07:46.413 ************************************ 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:46.413 rmmod nvme_tcp 00:07:46.413 rmmod nvme_fabrics 00:07:46.413 rmmod nvme_keyring 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.413 10:46:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.960 10:46:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:48.960 00:07:48.960 real 0m37.684s 00:07:48.960 user 1m51.718s 00:07:48.960 sys 0m8.041s 00:07:48.960 10:46:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.960 10:46:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.960 ************************************ 00:07:48.960 END TEST nvmf_filesystem 00:07:48.960 ************************************ 00:07:48.960 10:46:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:48.960 10:46:05 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:48.960 10:46:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.960 10:46:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.960 10:46:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.960 ************************************ 00:07:48.960 START TEST nvmf_target_discovery 00:07:48.960 ************************************ 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:48.960 * Looking for test storage... 
00:07:48.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.960 10:46:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.961 10:46:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.106 10:46:12 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:57.106 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:57.106 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.106 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:57.107 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:57.107 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:57.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:07:57.107 00:07:57.107 --- 10.0.0.2 ping statistics --- 00:07:57.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.107 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:07:57.107 00:07:57.107 --- 10.0.0.1 ping statistics --- 00:07:57.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.107 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:57.107 10:46:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1920775 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1920775 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1920775 ']' 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:57.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.107 [2024-07-12 10:46:13.100910] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:57.107 [2024-07-12 10:46:13.100971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.107 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.107 [2024-07-12 10:46:13.187770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.107 [2024-07-12 10:46:13.284625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.107 [2024-07-12 10:46:13.284683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.107 [2024-07-12 10:46:13.284691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.107 [2024-07-12 10:46:13.284698] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.107 [2024-07-12 10:46:13.284704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.107 [2024-07-12 10:46:13.284868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.107 [2024-07-12 10:46:13.285016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.107 [2024-07-12 10:46:13.285186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.107 [2024-07-12 10:46:13.285185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.107 [2024-07-12 10:46:13.957394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
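The RPC sequence that follows (repeated for cnode1 through cnode4) can be summarized as below. The commands and arguments all match the rpc_cmd calls in the trace; only the ./scripts/rpc.py path is an assumption:

# Sketch of the discovery-target setup loop traced above.
rpc=./scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    "$rpc" bdev_null_create "Null$i" 102400 512        # 102400 blocks x 512 B = 50 MiB
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
# plus a discovery listener and a referral, as in the trace:
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430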
00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.107 Null1 00:07:57.107 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 10:46:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:57.108 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 10:46:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:57.108 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 10:46:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 [2024-07-12 10:46:14.017786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 Null2 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:57.108 10:46:14 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 Null3 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.369 Null4 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:57.369 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.369 10:46:14 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:07:57.370 00:07:57.370 Discovery Log Number of Records 6, Generation counter 6 00:07:57.370 =====Discovery Log Entry 0====== 00:07:57.370 trtype: tcp 00:07:57.370 adrfam: ipv4 00:07:57.370 subtype: current discovery subsystem 00:07:57.370 treq: not required 00:07:57.370 portid: 0 00:07:57.370 trsvcid: 4420 00:07:57.370 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:57.370 traddr: 10.0.0.2 00:07:57.370 eflags: explicit discovery connections, duplicate discovery information 00:07:57.370 sectype: none 00:07:57.370 =====Discovery Log Entry 1====== 00:07:57.370 trtype: tcp 00:07:57.370 adrfam: ipv4 00:07:57.370 subtype: nvme subsystem 00:07:57.370 treq: not required 00:07:57.370 portid: 0 00:07:57.370 trsvcid: 4420 00:07:57.370 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:57.370 traddr: 10.0.0.2 00:07:57.370 eflags: none 00:07:57.370 sectype: none 00:07:57.370 =====Discovery Log Entry 2====== 00:07:57.370 trtype: tcp 00:07:57.370 adrfam: ipv4 00:07:57.370 subtype: nvme subsystem 00:07:57.370 treq: not required 00:07:57.370 portid: 0 00:07:57.370 trsvcid: 4420 00:07:57.370 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:57.370 traddr: 10.0.0.2 00:07:57.370 eflags: none 00:07:57.370 sectype: none 00:07:57.370 =====Discovery Log Entry 3====== 00:07:57.370 trtype: tcp 00:07:57.370 adrfam: ipv4 00:07:57.370 subtype: nvme subsystem 00:07:57.370 treq: not required 00:07:57.370 portid: 0 00:07:57.370 trsvcid: 4420 00:07:57.370 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:57.370 traddr: 10.0.0.2 00:07:57.370 eflags: none 00:07:57.370 sectype: none 00:07:57.370 =====Discovery Log Entry 4====== 00:07:57.370 trtype: tcp 00:07:57.370 adrfam: ipv4 00:07:57.370 subtype: nvme subsystem 00:07:57.370 treq: not required 
00:07:57.370 portid: 0 00:07:57.370 trsvcid: 4420 00:07:57.370 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:57.370 traddr: 10.0.0.2 00:07:57.370 eflags: none 00:07:57.370 sectype: none 00:07:57.370 =====Discovery Log Entry 5====== 00:07:57.370 trtype: tcp 00:07:57.370 adrfam: ipv4 00:07:57.370 subtype: discovery subsystem referral 00:07:57.370 treq: not required 00:07:57.370 portid: 0 00:07:57.370 trsvcid: 4430 00:07:57.370 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:57.370 traddr: 10.0.0.2 00:07:57.370 eflags: none 00:07:57.370 sectype: none 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:57.370 Perform nvmf subsystem discovery via RPC 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.370 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.370 [ 00:07:57.370 { 00:07:57.370 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:57.370 "subtype": "Discovery", 00:07:57.370 "listen_addresses": [ 00:07:57.370 { 00:07:57.370 "trtype": "TCP", 00:07:57.370 "adrfam": "IPv4", 00:07:57.370 "traddr": "10.0.0.2", 00:07:57.370 "trsvcid": "4420" 00:07:57.370 } 00:07:57.370 ], 00:07:57.370 "allow_any_host": true, 00:07:57.370 "hosts": [] 00:07:57.370 }, 00:07:57.370 { 00:07:57.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.370 "subtype": "NVMe", 00:07:57.370 "listen_addresses": [ 00:07:57.370 { 00:07:57.370 "trtype": "TCP", 00:07:57.370 "adrfam": "IPv4", 00:07:57.370 "traddr": "10.0.0.2", 00:07:57.370 "trsvcid": "4420" 00:07:57.370 } 00:07:57.370 ], 00:07:57.370 "allow_any_host": true, 00:07:57.370 "hosts": [], 00:07:57.370 "serial_number": "SPDK00000000000001", 00:07:57.370 "model_number": "SPDK bdev Controller", 00:07:57.370 "max_namespaces": 32, 00:07:57.370 "min_cntlid": 1, 00:07:57.370 "max_cntlid": 65519, 00:07:57.370 "namespaces": [ 00:07:57.370 { 00:07:57.370 "nsid": 1, 00:07:57.370 "bdev_name": "Null1", 00:07:57.370 "name": "Null1", 00:07:57.370 "nguid": "D501207DC2894A7BADA677434EDCAC1C", 00:07:57.370 "uuid": "d501207d-c289-4a7b-ada6-77434edcac1c" 00:07:57.370 } 00:07:57.370 ] 00:07:57.370 }, 00:07:57.370 { 00:07:57.370 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:57.370 "subtype": "NVMe", 00:07:57.370 "listen_addresses": [ 00:07:57.370 { 00:07:57.370 "trtype": "TCP", 00:07:57.370 "adrfam": "IPv4", 00:07:57.370 "traddr": "10.0.0.2", 00:07:57.370 "trsvcid": "4420" 00:07:57.370 } 00:07:57.370 ], 00:07:57.370 "allow_any_host": true, 00:07:57.370 "hosts": [], 00:07:57.370 "serial_number": "SPDK00000000000002", 00:07:57.370 "model_number": "SPDK bdev Controller", 00:07:57.370 "max_namespaces": 32, 00:07:57.370 "min_cntlid": 1, 00:07:57.370 "max_cntlid": 65519, 00:07:57.370 "namespaces": [ 00:07:57.370 { 00:07:57.370 "nsid": 1, 00:07:57.370 "bdev_name": "Null2", 00:07:57.370 "name": "Null2", 00:07:57.370 "nguid": "D124F904B34545898E863F7CD9163F0B", 00:07:57.370 "uuid": "d124f904-b345-4589-8e86-3f7cd9163f0b" 00:07:57.370 } 00:07:57.370 ] 00:07:57.370 }, 00:07:57.370 { 00:07:57.370 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:57.370 "subtype": "NVMe", 00:07:57.370 "listen_addresses": [ 00:07:57.370 { 00:07:57.370 "trtype": "TCP", 00:07:57.370 "adrfam": "IPv4", 00:07:57.370 "traddr": "10.0.0.2", 00:07:57.370 "trsvcid": "4420" 00:07:57.370 } 00:07:57.370 ], 00:07:57.370 "allow_any_host": true, 
00:07:57.370 "hosts": [], 00:07:57.370 "serial_number": "SPDK00000000000003", 00:07:57.370 "model_number": "SPDK bdev Controller", 00:07:57.370 "max_namespaces": 32, 00:07:57.370 "min_cntlid": 1, 00:07:57.370 "max_cntlid": 65519, 00:07:57.370 "namespaces": [ 00:07:57.370 { 00:07:57.370 "nsid": 1, 00:07:57.370 "bdev_name": "Null3", 00:07:57.370 "name": "Null3", 00:07:57.370 "nguid": "8095DDD1C746427AA8035A1265FC1F4B", 00:07:57.370 "uuid": "8095ddd1-c746-427a-a803-5a1265fc1f4b" 00:07:57.371 } 00:07:57.371 ] 00:07:57.371 }, 00:07:57.371 { 00:07:57.371 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:57.371 "subtype": "NVMe", 00:07:57.371 "listen_addresses": [ 00:07:57.371 { 00:07:57.371 "trtype": "TCP", 00:07:57.371 "adrfam": "IPv4", 00:07:57.371 "traddr": "10.0.0.2", 00:07:57.371 "trsvcid": "4420" 00:07:57.371 } 00:07:57.371 ], 00:07:57.371 "allow_any_host": true, 00:07:57.371 "hosts": [], 00:07:57.371 "serial_number": "SPDK00000000000004", 00:07:57.371 "model_number": "SPDK bdev Controller", 00:07:57.371 "max_namespaces": 32, 00:07:57.371 "min_cntlid": 1, 00:07:57.371 "max_cntlid": 65519, 00:07:57.371 "namespaces": [ 00:07:57.371 { 00:07:57.371 "nsid": 1, 00:07:57.371 "bdev_name": "Null4", 00:07:57.371 "name": "Null4", 00:07:57.371 "nguid": "FEDF2F9268E64129B35EE1D100BB788D", 00:07:57.371 "uuid": "fedf2f92-68e6-4129-b35e-e1d100bb788d" 00:07:57.371 } 00:07:57.371 ] 00:07:57.371 } 00:07:57.371 ] 00:07:57.371 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.371 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:57.371 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.371 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.371 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.371 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.371 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.371 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:57.371 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.371 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
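Note that the nvmf_get_subsystems dump above lists only the local subsystems (the discovery subsystem plus cnode1-4); the port-4430 referral is a discovery-page entry only and is queried separately via nvmf_discovery_get_referrals. The teardown that follows reverses the setup; roughly, with the same rpc.py client assumed:

    for i in 1 2 3 4; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # its listener and namespace go with it
        scripts/rpc.py bdev_null_delete "Null$i"
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430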
00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.633 rmmod nvme_tcp 00:07:57.633 rmmod nvme_fabrics 00:07:57.633 rmmod nvme_keyring 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1920775 ']' 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1920775 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1920775 ']' 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1920775 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1920775 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1920775' 00:07:57.633 killing process with pid 1920775 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1920775 00:07:57.633 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1920775 00:07:57.894 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.894 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.894 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.894 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.894 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.894 10:46:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.894 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.894 10:46:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.443 10:46:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:00.443 00:08:00.443 real 0m11.317s 00:08:00.443 user 0m8.194s 00:08:00.443 sys 0m5.903s 00:08:00.443 10:46:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.443 10:46:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.443 ************************************ 00:08:00.443 END TEST nvmf_target_discovery 00:08:00.443 ************************************ 00:08:00.443 10:46:16 nvmf_tcp -- common/autotest_common.sh@1142 
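The core assertion of the test that just ended: with four subsystems and one referral registered, the discovery controller must report exactly six log records, itself included. Reproduced by hand with the host identity from this run:

    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    # expected: "Discovery Log Number of Records 6", i.e. 1 current discovery
    # subsystem + 4 nvme subsystems (cnode1-4) + 1 referral on trsvcid 4430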
-- # return 0 00:08:00.443 10:46:16 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:00.443 10:46:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.443 10:46:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.443 10:46:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.443 ************************************ 00:08:00.443 START TEST nvmf_referrals 00:08:00.443 ************************************ 00:08:00.443 10:46:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:00.443 * Looking for test storage... 00:08:00.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.443 10:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.443 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:00.443 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.443 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.443 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.443 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.444 10:46:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.588 10:46:24 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:08.588 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:08.588 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.588 10:46:24 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:08.588 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:08.588 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.588 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.589 10:46:24 
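The sequence above pins down the physical test topology: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side, while cvl_0_1 stays in the root namespace as the initiator, so NVMe/TCP traffic crosses a real link rather than loopback. A condensed sketch of the commands traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port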
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:08.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:08:08.589 00:08:08.589 --- 10.0.0.2 ping statistics --- 00:08:08.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.589 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:08:08.589 00:08:08.589 --- 10.0.0.1 ping statistics --- 00:08:08.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.589 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1925293 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1925293 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1925293 ']' 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:08.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.589 10:46:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.589 [2024-07-12 10:46:24.506756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:08.589 [2024-07-12 10:46:24.506818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.589 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.589 [2024-07-12 10:46:24.595244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.589 [2024-07-12 10:46:24.691033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.589 [2024-07-12 10:46:24.691095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.589 [2024-07-12 10:46:24.691104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.589 [2024-07-12 10:46:24.691111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.589 [2024-07-12 10:46:24.691117] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.589 [2024-07-12 10:46:24.691292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.589 [2024-07-12 10:46:24.691445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.589 [2024-07-12 10:46:24.691608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.589 [2024-07-12 10:46:24.691608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.589 [2024-07-12 10:46:25.358418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.589 [2024-07-12 10:46:25.374773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:08.589 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:08.851 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.112 10:46:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:09.112 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.112 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:09.112 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:09.112 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:09.112 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.112 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.112 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.112 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.112 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:09.373 10:46:26 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.373 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:09.638 10:46:26 
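What the jq filters above establish: a referral registered with an explicit subsystem NQN (-n nqn.2016-06.io.spdk:cnode1) is advertised on the discovery page with subtype "nvme subsystem", while one registered with -n discovery keeps the "discovery subsystem referral" subtype. A sketch of the pair, run against the port-8009 discovery listener this test uses:

    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
    # prints nqn.2016-06.io.spdk:cnode1; the discovery-NQN referral appears only
    # under select(.subtype == "discovery subsystem referral")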
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.638 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.946 10:46:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.224 
10:46:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.224 rmmod nvme_tcp 00:08:10.224 rmmod nvme_fabrics 00:08:10.224 rmmod nvme_keyring 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1925293 ']' 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1925293 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1925293 ']' 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1925293 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1925293 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1925293' 00:08:10.224 killing process with pid 1925293 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1925293 00:08:10.224 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1925293 00:08:10.485 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.485 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.485 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.485 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.485 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.485 10:46:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.485 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.485 10:46:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.397 10:46:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:12.397 00:08:12.397 real 0m12.427s 00:08:12.397 user 0m13.338s 00:08:12.397 sys 0m6.264s 00:08:12.397 10:46:29 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.397 10:46:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.397 ************************************ 00:08:12.397 END TEST nvmf_referrals 00:08:12.397 ************************************ 00:08:12.658 10:46:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:12.658 10:46:29 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:12.658 10:46:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:12.658 10:46:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.658 10:46:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.658 ************************************ 00:08:12.658 START TEST nvmf_connect_disconnect 00:08:12.658 ************************************ 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:12.658 * Looking for test storage... 00:08:12.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.658 10:46:29 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.658 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- 
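Note that paths/export.sh re-prepends the Go/protoc/golangci directories every time it is sourced, which is why the exported PATH above carries several copies of each entry. This is harmless, but if the duplication mattered it could be squeezed out with something like the following (a hypothetical helper, not part of the harness):

  dedupe_path() {
      local IFS=: out= d
      for d in $PATH; do
          # keep only the first occurrence of each directory
          case ":$out:" in *":$d:"*) ;; *) out=${out:+$out:}$d ;; esac
      done
      printf '%s\n' "$out"
  }
  PATH=$(dedupe_path)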
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:12.659 10:46:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:20.807 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:20.807 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:20.807 10:46:36 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.807 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:20.808 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:20.808 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- 
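The device scan above is plain sysfs globbing: each supported PCI function is mapped to the netdev names the kernel registered under it. Reproduced by hand for the two E810 ports found in this run:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"   # cvl_0_0 / cvl_0_1 here
  done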
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:20.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:08:20.808 00:08:20.808 --- 10.0.0.2 ping statistics --- 00:08:20.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.808 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
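nvmf_tcp_init moves the target-side port into its own network namespace so that initiator and target can exchange real TCP traffic over the E810 link on a single host. The wiring, condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # sanity check, producing the ping statistics shown here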
00:08:20.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:08:20.808 00:08:20.808 --- 10.0.0.1 ping statistics --- 00:08:20.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.808 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1930061 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1930061 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1930061 ']' 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.808 10:46:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:20.808 [2024-07-12 10:46:36.953099] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:20.808 [2024-07-12 10:46:36.953166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.808 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.808 [2024-07-12 10:46:37.039738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.808 [2024-07-12 10:46:37.135401] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.808 [2024-07-12 10:46:37.135460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.808 [2024-07-12 10:46:37.135469] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.808 [2024-07-12 10:46:37.135476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.808 [2024-07-12 10:46:37.135482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.808 [2024-07-12 10:46:37.135650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.808 [2024-07-12 10:46:37.135795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.808 [2024-07-12 10:46:37.135957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.808 [2024-07-12 10:46:37.135957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.808 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.809 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:20.809 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.809 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.809 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.070 [2024-07-12 10:46:37.806489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:21.070 10:46:37 
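The "EAL: No free 2048 kB hugepages reported on node 1" notice typically just means node 1 has no 2 MB hugepages configured while node 0 carries them all; the per-node counts can be inspected through the standard sysfs layout (a sketch):

  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages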
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.070 [2024-07-12 10:46:37.872200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:21.070 10:46:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:25.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:39.388 rmmod nvme_tcp 00:08:39.388 rmmod nvme_fabrics 00:08:39.388 rmmod nvme_keyring 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1930061 ']' 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1930061 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- 
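Each "disconnected 1 controller(s)" line above is one of the num_iterations=5 passes of the connect/disconnect loop against the subsystem just provisioned. The whole exercise as a sketch, with rpc_cmd standing in for scripts/rpc.py driven at the target's RPC socket; the loop body itself is not shown in the trace, and the real script also waits for the controller to appear before disconnecting:

  # one-time target provisioning, as traced above
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                       # -> Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: five connect/disconnect cycles
  for i in {1..5}; do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
  done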
common/autotest_common.sh@948 -- # '[' -z 1930061 ']' 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1930061 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1930061 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1930061' 00:08:39.388 killing process with pid 1930061 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1930061 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1930061 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.388 10:46:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.957 10:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:41.957 00:08:41.957 real 0m28.942s 00:08:41.957 user 1m18.496s 00:08:41.957 sys 0m6.775s 00:08:41.957 10:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.957 10:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.957 ************************************ 00:08:41.957 END TEST nvmf_connect_disconnect 00:08:41.957 ************************************ 00:08:41.957 10:46:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:41.957 10:46:58 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:41.957 10:46:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:41.957 10:46:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.957 10:46:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.957 ************************************ 00:08:41.957 START TEST nvmf_multitarget 00:08:41.957 ************************************ 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:41.957 * Looking for test storage... 
00:08:41.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
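The host identity threaded through every nvme command in these logs is generated once per test by nvme-cli itself (nvmf/common.sh@17 above). A sketch of the derivation, taking the hostid as the UUID tail of the generated NQN:

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # bare uuid, e.g. 00d0226a-fbea-ec11-9bc7-a4bf019282be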
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:41.957 10:46:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:50.097 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:50.097 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:50.097 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:50.097 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:50.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:50.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:08:50.097 00:08:50.097 --- 10.0.0.2 ping statistics --- 00:08:50.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.097 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:08:50.097 00:08:50.097 --- 10.0.0.1 ping statistics --- 00:08:50.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.097 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:50.097 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1938171 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1938171 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1938171 ']' 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.098 10:47:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:50.098 [2024-07-12 10:47:05.979372] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:50.098 [2024-07-12 10:47:05.979438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.098 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.098 [2024-07-12 10:47:06.065452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.098 [2024-07-12 10:47:06.161364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.098 [2024-07-12 10:47:06.161423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.098 [2024-07-12 10:47:06.161432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.098 [2024-07-12 10:47:06.161438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.098 [2024-07-12 10:47:06.161444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.098 [2024-07-12 10:47:06.161609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.098 [2024-07-12 10:47:06.161768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.098 [2024-07-12 10:47:06.161930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.098 [2024-07-12 10:47:06.161930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:50.098 10:47:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:50.098 "nvmf_tgt_1" 00:08:50.098 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:50.358 "nvmf_tgt_2" 00:08:50.358 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:50.358 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:50.358 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:50.358 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:50.620 true 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:50.620 true 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.620 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.620 rmmod nvme_tcp 00:08:50.881 rmmod nvme_fabrics 00:08:50.881 rmmod nvme_keyring 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1938171 ']' 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1938171 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1938171 ']' 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1938171 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1938171 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1938171' 00:08:50.881 killing process with pid 1938171 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1938171 00:08:50.881 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1938171 00:08:51.142 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.142 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.142 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.142 10:47:07 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.142 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.142 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.142 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.142 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.157 10:47:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.157 00:08:53.157 real 0m11.491s 00:08:53.157 user 0m9.682s 00:08:53.157 sys 0m5.985s 00:08:53.157 10:47:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.157 10:47:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:53.157 ************************************ 00:08:53.157 END TEST nvmf_multitarget 00:08:53.157 ************************************ 00:08:53.157 10:47:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:53.157 10:47:10 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:53.157 10:47:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:53.157 10:47:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.158 10:47:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:53.158 ************************************ 00:08:53.158 START TEST nvmf_rpc 00:08:53.158 ************************************ 00:08:53.158 10:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:53.158 * Looking for test storage... 
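The nvmf_multitarget run that just ended reduces to a short RPC sequence: confirm only the default target exists, create two extra targets, confirm the count went up, delete them, and confirm the count drops back. A condensed sketch using the same multitarget_rpc.py calls and flags as the trace above (path relative to the spdk repo root):

rpc=test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default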
00:08:53.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:53.419 10:47:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
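The gather_supported_nvmf_pci_devs walk that follows matches supported NIC device IDs (for SPDK_TEST_NVMF_NICS=e810, Intel 0x1592 and 0x159b) and resolves each matching PCI function to its kernel netdev via sysfs. A rough equivalent; the lspci filter is an illustrative substitute for the script's pci_bus_cache lookup, shown only for the 0x159b ID this run actually finds:

for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        # on this host these resolve to cvl_0_0 and cvl_0_1
        echo "Found net devices under $pci: ${net##*/}"
    done
done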
00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:01.569 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:01.569 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:01.569 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.569 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:01.570 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:09:01.570 00:09:01.570 --- 10.0.0.2 ping statistics --- 00:09:01.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.570 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:09:01.570 00:09:01.570 --- 10.0.0.1 ping statistics --- 00:09:01.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.570 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1942568 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1942568 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1942568 ']' 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.570 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.570 [2024-07-12 10:47:17.567005] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:01.570 [2024-07-12 10:47:17.567073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.570 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.570 [2024-07-12 10:47:17.659346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.570 [2024-07-12 10:47:17.759073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.570 [2024-07-12 10:47:17.759142] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:01.570 [2024-07-12 10:47:17.759152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.570 [2024-07-12 10:47:17.759160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.570 [2024-07-12 10:47:17.759166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.570 [2024-07-12 10:47:17.759261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.570 [2024-07-12 10:47:17.759455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.570 [2024-07-12 10:47:17.759617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.570 [2024-07-12 10:47:17.759618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:01.570 "tick_rate": 2400000000, 00:09:01.570 "poll_groups": [ 00:09:01.570 { 00:09:01.570 "name": "nvmf_tgt_poll_group_000", 00:09:01.570 "admin_qpairs": 0, 00:09:01.570 "io_qpairs": 0, 00:09:01.570 "current_admin_qpairs": 0, 00:09:01.570 "current_io_qpairs": 0, 00:09:01.570 "pending_bdev_io": 0, 00:09:01.570 "completed_nvme_io": 0, 00:09:01.570 "transports": [] 00:09:01.570 }, 00:09:01.570 { 00:09:01.570 "name": "nvmf_tgt_poll_group_001", 00:09:01.570 "admin_qpairs": 0, 00:09:01.570 "io_qpairs": 0, 00:09:01.570 "current_admin_qpairs": 0, 00:09:01.570 "current_io_qpairs": 0, 00:09:01.570 "pending_bdev_io": 0, 00:09:01.570 "completed_nvme_io": 0, 00:09:01.570 "transports": [] 00:09:01.570 }, 00:09:01.570 { 00:09:01.570 "name": "nvmf_tgt_poll_group_002", 00:09:01.570 "admin_qpairs": 0, 00:09:01.570 "io_qpairs": 0, 00:09:01.570 "current_admin_qpairs": 0, 00:09:01.570 "current_io_qpairs": 0, 00:09:01.570 "pending_bdev_io": 0, 00:09:01.570 "completed_nvme_io": 0, 00:09:01.570 "transports": [] 00:09:01.570 }, 00:09:01.570 { 00:09:01.570 "name": "nvmf_tgt_poll_group_003", 00:09:01.570 "admin_qpairs": 0, 00:09:01.570 "io_qpairs": 0, 00:09:01.570 "current_admin_qpairs": 0, 00:09:01.570 "current_io_qpairs": 0, 00:09:01.570 "pending_bdev_io": 0, 00:09:01.570 "completed_nvme_io": 0, 00:09:01.570 "transports": [] 00:09:01.570 } 00:09:01.570 ] 00:09:01.570 }' 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.570 [2024-07-12 10:47:18.530715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.570 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:01.832 "tick_rate": 2400000000, 00:09:01.832 "poll_groups": [ 00:09:01.832 { 00:09:01.832 "name": "nvmf_tgt_poll_group_000", 00:09:01.832 "admin_qpairs": 0, 00:09:01.832 "io_qpairs": 0, 00:09:01.832 "current_admin_qpairs": 0, 00:09:01.832 "current_io_qpairs": 0, 00:09:01.832 "pending_bdev_io": 0, 00:09:01.832 "completed_nvme_io": 0, 00:09:01.832 "transports": [ 00:09:01.832 { 00:09:01.832 "trtype": "TCP" 00:09:01.832 } 00:09:01.832 ] 00:09:01.832 }, 00:09:01.832 { 00:09:01.832 "name": "nvmf_tgt_poll_group_001", 00:09:01.832 "admin_qpairs": 0, 00:09:01.832 "io_qpairs": 0, 00:09:01.832 "current_admin_qpairs": 0, 00:09:01.832 "current_io_qpairs": 0, 00:09:01.832 "pending_bdev_io": 0, 00:09:01.832 "completed_nvme_io": 0, 00:09:01.832 "transports": [ 00:09:01.832 { 00:09:01.832 "trtype": "TCP" 00:09:01.832 } 00:09:01.832 ] 00:09:01.832 }, 00:09:01.832 { 00:09:01.832 "name": "nvmf_tgt_poll_group_002", 00:09:01.832 "admin_qpairs": 0, 00:09:01.832 "io_qpairs": 0, 00:09:01.832 "current_admin_qpairs": 0, 00:09:01.832 "current_io_qpairs": 0, 00:09:01.832 "pending_bdev_io": 0, 00:09:01.832 "completed_nvme_io": 0, 00:09:01.832 "transports": [ 00:09:01.832 { 00:09:01.832 "trtype": "TCP" 00:09:01.832 } 00:09:01.832 ] 00:09:01.832 }, 00:09:01.832 { 00:09:01.832 "name": "nvmf_tgt_poll_group_003", 00:09:01.832 "admin_qpairs": 0, 00:09:01.832 "io_qpairs": 0, 00:09:01.832 "current_admin_qpairs": 0, 00:09:01.832 "current_io_qpairs": 0, 00:09:01.832 "pending_bdev_io": 0, 00:09:01.832 "completed_nvme_io": 0, 00:09:01.832 "transports": [ 00:09:01.832 { 00:09:01.832 "trtype": "TCP" 00:09:01.832 } 00:09:01.832 ] 00:09:01.832 } 00:09:01.832 ] 00:09:01.832 }' 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.832 Malloc1 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.832 [2024-07-12 10:47:18.724943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.832 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:01.833 [2024-07-12 10:47:18.761972] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:01.833 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:01.833 could not add new controller: failed to write to nvme-fabrics device 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.833 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.767 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.767 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:03.767 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.767 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:03.767 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.681 10:47:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.681 [2024-07-12 10:47:22.508655] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:05.681 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:05.681 could not add new controller: failed to write to nvme-fabrics device 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.681 10:47:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.591 10:47:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.591 10:47:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:07.591 10:47:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.591 10:47:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:07.591 10:47:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:09.505 10:47:26 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.505 [2024-07-12 10:47:26.262938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.505 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.887 10:47:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.887 10:47:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:10.887 10:47:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.887 10:47:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:10.887 10:47:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:12.796 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:12.796 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:12.796 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.055 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.055 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.055 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:13.055 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.055 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.055 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:13.055 10:47:29 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:13.055 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.056 [2024-07-12 10:47:29.937603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.056 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.966 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.966 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:14.966 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.966 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:14.966 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.877 [2024-07-12 10:47:33.684612] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.877 10:47:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.362 10:47:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.362 10:47:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:18.362 10:47:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.362 10:47:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:18.362 10:47:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.276 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.276 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.276 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.276 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:20.276 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.276 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:20.276 10:47:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.536 [2024-07-12 10:47:37.393221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.536 10:47:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.920 10:47:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.920 10:47:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:21.920 10:47:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.920 10:47:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:21.920 10:47:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:24.464 10:47:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:24.464 10:47:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:24.464 10:47:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.464 10:47:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:24.464 10:47:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.464 
10:47:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:24.464 10:47:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.464 [2024-07-12 10:47:41.101893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.464 10:47:41 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.464 10:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.848 10:47:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.848 10:47:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:25.848 10:47:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.848 10:47:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:25.848 10:47:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:27.759 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:27.759 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:27.759 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.759 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:27.759 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.759 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:27.759 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.759 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:27.759 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.020 [2024-07-12 10:47:44.815291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.020 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 [2024-07-12 10:47:44.875411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 [2024-07-12 10:47:44.939598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
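The waitforserial and waitforserial_disconnect helpers traced repeatedly above poll lsblk until a namespace with the expected serial number appears (or disappears), giving the fabric a bounded window to surface the block device. A minimal standalone sketch of that polling loop, assuming bash and the SPDKISFASTANDAWESOME serial these tests use:

  #!/usr/bin/env bash
  # Poll lsblk until a device with the given NVMe serial shows up (sketch of waitforserial).
  waitforserial() {
      local serial=$1 expected=${2:-1} i=0 nvme_devices=0
      while (( i++ <= 15 )); do
          sleep 2
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == expected )) && return 0
      done
      return 1   # device never appeared within the retry budget
  }
  waitforserial SPDKISFASTANDAWESOME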
00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.021 10:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.021 [2024-07-12 10:47:44.999794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
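Each pass of the loop traced above drives one full subsystem lifecycle through rpc.py: create the subsystem, expose a TCP listener, attach and later detach a namespace, then tear the subsystem down. A condensed sketch of a single pass, with the NQN, serial, address, and Malloc1 bdev name taken from the traces (rpc.py is assumed to be on PATH):

  # One subsystem lifecycle, as exercised by target/rpc.sh (sketch).
  NQN=nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1   # the earlier loop pinned the NSID with -n 5
  rpc.py nvmf_subsystem_allow_any_host "$NQN"
  rpc.py nvmf_subsystem_remove_ns "$NQN" 1
  rpc.py nvmf_delete_subsystem "$NQN"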
00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 [2024-07-12 10:47:45.059983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:28.282 "tick_rate": 2400000000, 00:09:28.282 "poll_groups": [ 00:09:28.282 { 00:09:28.282 "name": "nvmf_tgt_poll_group_000", 00:09:28.282 "admin_qpairs": 0, 00:09:28.282 "io_qpairs": 224, 00:09:28.282 "current_admin_qpairs": 0, 00:09:28.282 "current_io_qpairs": 0, 00:09:28.282 "pending_bdev_io": 0, 00:09:28.282 "completed_nvme_io": 228, 00:09:28.282 "transports": [ 00:09:28.282 { 00:09:28.282 "trtype": "TCP" 00:09:28.282 } 00:09:28.282 ] 00:09:28.282 }, 00:09:28.282 { 00:09:28.282 "name": "nvmf_tgt_poll_group_001", 00:09:28.282 "admin_qpairs": 1, 00:09:28.282 "io_qpairs": 223, 00:09:28.282 "current_admin_qpairs": 0, 00:09:28.282 "current_io_qpairs": 0, 00:09:28.282 "pending_bdev_io": 0, 00:09:28.282 "completed_nvme_io": 251, 00:09:28.282 "transports": [ 00:09:28.282 { 00:09:28.282 "trtype": "TCP" 00:09:28.282 } 00:09:28.282 ] 00:09:28.282 }, 00:09:28.282 { 
00:09:28.282 "name": "nvmf_tgt_poll_group_002", 00:09:28.282 "admin_qpairs": 6, 00:09:28.282 "io_qpairs": 218, 00:09:28.282 "current_admin_qpairs": 0, 00:09:28.282 "current_io_qpairs": 0, 00:09:28.282 "pending_bdev_io": 0, 00:09:28.282 "completed_nvme_io": 427, 00:09:28.282 "transports": [ 00:09:28.282 { 00:09:28.282 "trtype": "TCP" 00:09:28.282 } 00:09:28.282 ] 00:09:28.282 }, 00:09:28.282 { 00:09:28.282 "name": "nvmf_tgt_poll_group_003", 00:09:28.282 "admin_qpairs": 0, 00:09:28.282 "io_qpairs": 224, 00:09:28.282 "current_admin_qpairs": 0, 00:09:28.282 "current_io_qpairs": 0, 00:09:28.282 "pending_bdev_io": 0, 00:09:28.282 "completed_nvme_io": 333, 00:09:28.282 "transports": [ 00:09:28.282 { 00:09:28.282 "trtype": "TCP" 00:09:28.282 } 00:09:28.282 ] 00:09:28.282 } 00:09:28.282 ] 00:09:28.282 }' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.282 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.283 rmmod nvme_tcp 00:09:28.283 rmmod nvme_fabrics 00:09:28.283 rmmod nvme_keyring 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1942568 ']' 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1942568 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1942568 ']' 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1942568 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1942568 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1942568' 00:09:28.543 killing process with pid 1942568 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1942568 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1942568 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.543 10:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.085 10:47:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:31.085 00:09:31.085 real 0m37.502s 00:09:31.085 user 1m52.833s 00:09:31.085 sys 0m7.407s 00:09:31.085 10:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.085 10:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.085 ************************************ 00:09:31.085 END TEST nvmf_rpc 00:09:31.085 ************************************ 00:09:31.085 10:47:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:31.085 10:47:47 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:31.085 10:47:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:31.085 10:47:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.085 10:47:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:31.085 ************************************ 00:09:31.085 START TEST nvmf_invalid 00:09:31.085 ************************************ 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:31.085 * Looking for test storage... 
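The qpair totals checked just before the teardown above come from the jsum helper: it runs a jq filter over the nvmf_get_stats JSON and sums the resulting numbers with awk. A minimal reimplementation of that pattern, assuming rpc.py and a running target:

  # Sum a numeric jq filter across nvmf_get_stats output (sketch of jsum).
  jsum() {
      jq "$1" | awk '{s+=$1} END {print s}'
  }
  rpc.py nvmf_get_stats | jsum '.poll_groups[].admin_qpairs'   # 7 in the run above
  rpc.py nvmf_get_stats | jsum '.poll_groups[].io_qpairs'      # 889 in the run above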
00:09:31.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.085 10:47:47 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:31.086 10:47:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:39.226 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:39.226 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:39.226 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:39.226 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:39.226 10:47:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.226 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.226 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.226 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:39.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:39.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:09:39.226 00:09:39.226 --- 10.0.0.2 ping statistics --- 00:09:39.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.227 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:39.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:09:39.227 00:09:39.227 --- 10.0.0.1 ping statistics --- 00:09:39.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.227 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1952426 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1952426 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1952426 ']' 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:39.227 [2024-07-12 10:47:55.172593] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
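nvmfappstart, traced at the end of this block, launches nvmf_tgt inside the target network namespace and then blocks in waitforlisten until the RPC socket responds. A rough equivalent of that startup handshake, assuming the namespace name, binary path, and default /var/tmp/spdk.sock socket seen in the log:

  # Start the NVMe-oF target in its namespace and wait for its RPC socket (sketch).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is up"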
00:09:39.227 [2024-07-12 10:47:55.172656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.227 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.227 [2024-07-12 10:47:55.260665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.227 [2024-07-12 10:47:55.356295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.227 [2024-07-12 10:47:55.356350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.227 [2024-07-12 10:47:55.356359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.227 [2024-07-12 10:47:55.356365] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.227 [2024-07-12 10:47:55.356371] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.227 [2024-07-12 10:47:55.356545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.227 [2024-07-12 10:47:55.356708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.227 [2024-07-12 10:47:55.356875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.227 [2024-07-12 10:47:55.356876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:39.227 10:47:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:39.227 10:47:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.227 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:39.227 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25673 00:09:39.227 [2024-07-12 10:47:56.167819] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:39.227 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:39.227 { 00:09:39.227 "nqn": "nqn.2016-06.io.spdk:cnode25673", 00:09:39.227 "tgt_name": "foobar", 00:09:39.227 "method": "nvmf_create_subsystem", 00:09:39.227 "req_id": 1 00:09:39.227 } 00:09:39.227 Got JSON-RPC error response 00:09:39.227 response: 00:09:39.227 { 00:09:39.227 "code": -32603, 00:09:39.227 "message": "Unable to find target foobar" 00:09:39.227 }' 00:09:39.227 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:39.227 { 00:09:39.227 "nqn": "nqn.2016-06.io.spdk:cnode25673", 00:09:39.227 "tgt_name": "foobar", 00:09:39.227 "method": "nvmf_create_subsystem", 00:09:39.227 "req_id": 1 00:09:39.227 } 00:09:39.227 Got JSON-RPC error response 00:09:39.227 response: 00:09:39.227 { 00:09:39.227 "code": -32603, 00:09:39.227 "message": "Unable to find target foobar" 
00:09:39.227 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:39.227 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:39.487 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31269 00:09:39.487 [2024-07-12 10:47:56.360566] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31269: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:39.487 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:39.487 { 00:09:39.487 "nqn": "nqn.2016-06.io.spdk:cnode31269", 00:09:39.487 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:39.487 "method": "nvmf_create_subsystem", 00:09:39.487 "req_id": 1 00:09:39.487 } 00:09:39.487 Got JSON-RPC error response 00:09:39.487 response: 00:09:39.487 { 00:09:39.487 "code": -32602, 00:09:39.487 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:39.487 }' 00:09:39.487 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:39.487 { 00:09:39.487 "nqn": "nqn.2016-06.io.spdk:cnode31269", 00:09:39.487 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:39.487 "method": "nvmf_create_subsystem", 00:09:39.487 "req_id": 1 00:09:39.487 } 00:09:39.487 Got JSON-RPC error response 00:09:39.487 response: 00:09:39.487 { 00:09:39.487 "code": -32602, 00:09:39.487 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:39.487 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:39.487 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:39.487 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24044 00:09:39.748 [2024-07-12 10:47:56.553315] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24044: invalid model number 'SPDK_Controller' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:39.748 { 00:09:39.748 "nqn": "nqn.2016-06.io.spdk:cnode24044", 00:09:39.748 "model_number": "SPDK_Controller\u001f", 00:09:39.748 "method": "nvmf_create_subsystem", 00:09:39.748 "req_id": 1 00:09:39.748 } 00:09:39.748 Got JSON-RPC error response 00:09:39.748 response: 00:09:39.748 { 00:09:39.748 "code": -32602, 00:09:39.748 "message": "Invalid MN SPDK_Controller\u001f" 00:09:39.748 }' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:39.748 { 00:09:39.748 "nqn": "nqn.2016-06.io.spdk:cnode24044", 00:09:39.748 "model_number": "SPDK_Controller\u001f", 00:09:39.748 "method": "nvmf_create_subsystem", 00:09:39.748 "req_id": 1 00:09:39.748 } 00:09:39.748 Got JSON-RPC error response 00:09:39.748 response: 00:09:39.748 { 00:09:39.748 "code": -32602, 00:09:39.748 "message": "Invalid MN SPDK_Controller\u001f" 00:09:39.748 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.748 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.009 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ? 
== \- ]] 00:09:40.010 10:47:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '?iR$/|Vwr46TW<I|^F1g' 00:09:42.350 10:47:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:42.350 10:47:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.894 10:48:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.894 00:09:44.894 real 0m13.645s 00:09:44.894 user 0m19.601s 00:09:44.894 sys 0m6.524s 00:09:44.894 10:48:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.894 10:48:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:44.894 ************************************ 00:09:44.894 END TEST nvmf_invalid 00:09:44.894 ************************************ 00:09:44.894 10:48:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:44.894 10:48:01 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:44.894 10:48:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:44.894 10:48:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.894 10:48:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.894 ************************************ 00:09:44.894 START TEST nvmf_abort 00:09:44.894 ************************************ 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:44.894 * Looking for test storage... 00:09:44.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.894 10:48:01
nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.894 
10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.894 10:48:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:53.031 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:53.031 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:09:53.031 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.031 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:53.032 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:53.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:09:53.032 00:09:53.032 --- 10.0.0.2 ping statistics --- 00:09:53.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.032 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:09:53.032 00:09:53.032 --- 10.0.0.1 ping statistics --- 00:09:53.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.032 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1957703 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1957703 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1957703 ']' 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.032 10:48:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.032 [2024-07-12 10:48:08.915855] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
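The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) is plain iproute2 plumbing on the two ice/E810 ports: one port is moved into a network namespace so the target and the initiator run on separate IP stacks of the same machine. Condensed, and assuming cvl_0_0 and cvl_0_1 already exist as the port netdevs:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps port 1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP in, then smoke-test both directions with one ping each:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1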
00:09:53.032 [2024-07-12 10:48:08.915918] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.032 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.032 [2024-07-12 10:48:09.002996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.032 [2024-07-12 10:48:09.097763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.032 [2024-07-12 10:48:09.097821] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.032 [2024-07-12 10:48:09.097830] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.032 [2024-07-12 10:48:09.097837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.032 [2024-07-12 10:48:09.097843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.032 [2024-07-12 10:48:09.098008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.032 [2024-07-12 10:48:09.098185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.032 [2024-07-12 10:48:09.098231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.032 [2024-07-12 10:48:09.755725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.032 Malloc0 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.032 Delay0 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
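rpc_cmd in this trace is the harness's wrapper around scripts/rpc.py, so the topology the abort test just built can be replayed by hand as four RPC calls against the running target (the namespace attach and the TCP listener on 10.0.0.2:4420 follow in the next lines of the trace):

    # TCP transport, flags exactly as traced above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    # 64 MiB RAM-backed bdev with 4096-byte blocks
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    # Wrap it in a delay bdev: 1,000,000 us average and p99 latency on both
    # reads and writes, so queued IOs live long enough to be aborted
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Subsystem that accepts any host (-a) with serial number SPDK0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0

It is Delay0, not Malloc0, that gets exported as the namespace below: without the injected latency, the queue would drain before an abort command could catch an in-flight IO.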
00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.032 [2024-07-12 10:48:09.844728] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.032 10:48:09 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:53.032 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.032 [2024-07-12 10:48:09.974981] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:55.579 Initializing NVMe Controllers 00:09:55.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:55.579 controller IO queue size 128 less than required 00:09:55.579 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:55.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:55.579 Initialization complete. Launching workers. 
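The workload itself is SPDK's bundled abort example, pointed at the listener just created. Run standalone it is a single command (flag roles as they read off the trace: -r transport ID, -c worker core mask, -t seconds to run, -l log level, -q queue depth):

    # One worker on core 0, 1-second run, queue depth 128
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

In the statistics that follow, the NS line counts IOs (127 completed normally, 32504 ended in error, which in this test means they were aborted) and the CTRLR line counts abort commands: 32569 submitted and 62 not submitted, with the submitted ones splitting into 32508 successful and 61 unsuccessful (32508 + 61 = 32569) and none failing outright.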
00:09:55.579 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 32504 00:09:55.579 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32569, failed to submit 62 00:09:55.579 success 32508, unsuccess 61, failed 0 00:09:55.579 10:48:12 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:55.579 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.579 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.580 rmmod nvme_tcp 00:09:55.580 rmmod nvme_fabrics 00:09:55.580 rmmod nvme_keyring 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1957703 ']' 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1957703 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1957703 ']' 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1957703 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1957703 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1957703' 00:09:55.580 killing process with pid 1957703 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1957703 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1957703 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.580 10:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.495 10:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.495 00:09:57.495 real 0m13.024s 00:09:57.495 user 0m13.539s 00:09:57.495 sys 0m6.315s 00:09:57.495 10:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.495 10:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:57.495 ************************************ 00:09:57.495 END TEST nvmf_abort 00:09:57.495 ************************************ 00:09:57.495 10:48:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:57.495 10:48:14 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:57.495 10:48:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:57.495 10:48:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.495 10:48:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.495 ************************************ 00:09:57.495 START TEST nvmf_ns_hotplug_stress 00:09:57.495 ************************************ 00:09:57.495 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:57.756 * Looking for test storage... 00:09:57.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.756 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.757 10:48:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.757 10:48:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:57.757 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:05.901 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:05.901 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.901 10:48:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:05.901 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:05.901 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.901 10:48:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.901 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:05.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:10:05.902 00:10:05.902 --- 10.0.0.2 ping statistics --- 00:10:05.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.902 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:05.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:05.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:10:05.902 00:10:05.902 --- 10.0.0.1 ping statistics --- 00:10:05.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.902 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1962893 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1962893 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1962893 ']' 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.902 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.902 [2024-07-12 10:48:21.991000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
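Aside: the nvmf_tcp_init trace above (common.sh@229-268) boils down to the following network plumbing; this is a minimal sketch reconstructed from the traced commands, assuming the same E810 port names cvl_0_0/cvl_0_1 and addresses, not a verbatim copy of nvmf/common.sh:

  # Target port goes into its own namespace; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # NVMF_INITIATOR_IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # NVMF_FIRST_TARGET_IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port and verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why nvmf_tgt below is launched via 'ip netns exec cvl_0_0_ns_spdk' while rpc.py keeps talking to it over the /var/tmp/spdk.sock UNIX socket.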
00:10:05.902 [2024-07-12 10:48:21.991066] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.902 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.902 [2024-07-12 10:48:22.077272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:05.902 [2024-07-12 10:48:22.171024] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.902 [2024-07-12 10:48:22.171085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.902 [2024-07-12 10:48:22.171093] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.902 [2024-07-12 10:48:22.171100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.902 [2024-07-12 10:48:22.171106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.902 [2024-07-12 10:48:22.171295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.902 [2024-07-12 10:48:22.171551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.902 [2024-07-12 10:48:22.171551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.902 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.902 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:05.902 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:05.902 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:05.902 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.902 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.902 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:05.902 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:06.163 [2024-07-12 10:48:22.993048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.163 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:06.424 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.424 [2024-07-12 10:48:23.355997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.424 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:06.685 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:06.945 Malloc0 00:10:06.945 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:06.945 Delay0 00:10:07.206 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.206 10:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:07.466 NULL1 00:10:07.466 10:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:07.727 10:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:07.727 10:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1963548 00:10:07.727 10:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:07.727 10:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.727 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.668 Read completed with error (sct=0, sc=11) 00:10:08.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.668 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.929 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:08.929 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:09.189 true 00:10:09.189 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:09.189 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.130 10:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.130 10:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:10.130 10:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:10.130 true 00:10:10.130 10:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:10.130 10:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.402 10:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.705 10:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:10.705 10:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:10.705 true 00:10:10.705 10:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:10.705 10:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.087 10:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.087 10:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:12.087 10:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:12.347 true 00:10:12.347 10:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:12.347 10:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.286 10:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.286 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:13.286 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:13.547 true 00:10:13.547 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:13.547 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.547 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.808 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:13.808 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:14.067 true 00:10:14.067 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:14.067 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.067 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.327 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:14.327 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:14.587 true 00:10:14.587 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:14.587 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.587 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.847 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:14.847 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:14.847 true 00:10:15.107 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:15.107 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.107 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.367 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:15.367 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:15.367 true 00:10:15.627 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:15.627 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.627 10:48:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.888 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:15.888 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:15.888 true 00:10:15.888 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:15.888 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.147 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.407 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:16.407 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:16.407 true 00:10:16.407 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:16.407 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.667 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.928 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:16.928 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:16.928 true 00:10:16.928 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:16.928 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.188 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.448 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:17.448 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:17.448 true 00:10:17.448 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:17.448 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.389 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.650 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:18.650 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:18.650 true 00:10:18.650 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:18.650 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.910 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.910 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:18.910 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:19.171 true 00:10:19.171 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:19.171 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.440 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
[2024-07-12 10:48:36.367338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.441
[... the identical 'Read NLB 1 * block size 512 > SGL length 1' error from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeats continuously between 10:48:36.367 and 10:48:36.376, differing only in timestamps; duplicates elided ...]
size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.376988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377338] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.444 [2024-07-12 10:48:36.377904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 
[2024-07-12 10:48:36.378503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.378992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379938] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.379978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 [2024-07-12 10:48:36.380919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.445 
[2024-07-12 10:48:36.380946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.380974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.381993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382829] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.382999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.383024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.446 [2024-07-12 10:48:36.383048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 
[2024-07-12 10:48:36.383564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.383988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.384980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385438] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.447 [2024-07-12 10:48:36.385748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.385780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.385808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.385836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.385860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.385891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.385918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.385947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.385976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 
[2024-07-12 10:48:36.386175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.386666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.387998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388119] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.448 [2024-07-12 10:48:36.388518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 
[2024-07-12 10:48:36.388896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.388981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:19.449 [2024-07-12 10:48:36.389571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.389974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 
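The flood of repeated messages above is a deliberately exercised error path in the NVMe-oF target's read handler: each command requests NLB 1 at a 512-byte block size but supplies only a 1-byte SGL payload, so nvmf_bdev_ctrlr_read_cmd rejects the command before any bdev I/O is issued. The completion status in the suppressed message, sct=0 / sc=15, matches the NVMe generic status "Data SGL Length Invalid" (0x0f). Below is a minimal C sketch of the length check implied by ctrlr_bdev.c:309; the function and variable names are assumptions for illustration, not SPDK's verbatim source.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative sketch of the read-length validation suggested by the log:
 * a read of NLB logical blocks must fit inside the SGL the host supplied. */
static bool
nvmf_read_fits_sgl(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
	if (nlb * block_size > sgl_length) {
		/* Emits the same pattern as the log above, e.g.
		 * "Read NLB 1 * block size 512 > SGL length 1". */
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu64 "\n", nlb, block_size, sgl_length);
		return false; /* command would complete with sct=0, sc=0x0f */
	}
	return true;
}

int
main(void)
{
	/* The test's parameters as seen in the log:
	 * 1 block of 512 bytes against a 1-byte SGL -> rejected. */
	return nvmf_read_fits_sgl(1, 512, 1) ? 0 : 1;
}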
10:48:36.390013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:19.449 [2024-07-12 10:48:36.390754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.390977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.391003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.391031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.391057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.391087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.449 [2024-07-12 10:48:36.391113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.391139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.391169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.391196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.391225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.391836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.391863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.391891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.391926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.391953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.391980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392819] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.392986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 
[2024-07-12 10:48:36.393581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.393892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.394134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.394159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.394188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.394213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.394241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.394269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.450 [2024-07-12 10:48:36.394293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.394976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395311] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.395754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 
[2024-07-12 10:48:36.396373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.451 [2024-07-12 10:48:36.396889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.396920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.396946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.396977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397837] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.397923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 [2024-07-12 10:48:36.398870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452 
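The flood above is a single target-side check firing over and over while the hotplug stress test races reads against namespace resizes: nvmf_bdev_ctrlr_read_cmd in ctrlr_bdev.c rejects any read whose transfer length (NLB x block size) exceeds the buffer described by the request's SGL, and the host sees each rejection as "Read completed with error (sct=0, sc=15)" (generic status code 0x0F, Data SGL Length Invalid). Here 1 block x 512 bytes = 512 bytes requested against a 1-byte SGL, so every read fails. Below is a minimal C sketch of that kind of length check; the struct and field names are illustrative assumptions, not SPDK's actual definitions.

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    /* Illustrative request shape; SPDK's real structs differ. */
    struct io_req {
        uint64_t num_blocks;  /* NLB, already converted from the 0-based NVMe field */
        uint32_t block_size;  /* e.g. 512, as reported in the log above */
        uint32_t sgl_length;  /* total bytes the request's SGL can hold */
    };

    /* Reject reads whose data length exceeds the SGL-described buffer,
     * mirroring the check that logs
     * "Read NLB 1 * block size 512 > SGL length 1". */
    static int validate_read(const struct io_req *req)
    {
        if (req->num_blocks * (uint64_t)req->block_size > req->sgl_length) {
            fprintf(stderr,
                    "Read NLB %" PRIu64 " * block size %" PRIu32
                    " > SGL length %" PRIu32 "\n",
                    req->num_blocks, req->block_size, req->sgl_length);
            return 0x0F; /* NVMe generic status: Data SGL Length Invalid (sc=15) */
        }
        return 0; /* length fits, read may proceed */
    }

    int main(void)
    {
        /* The exact case from the log: 1 * 512 = 512 bytes requested,
         * but the SGL only covers 1 byte. */
        struct io_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
        printf("status 0x%02x\n", validate_read(&req));
        return 0;
    }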
[2024-07-12 10:48:36.398900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.452
10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
[... same *ERROR* line repeated verbatim from 10:48:36.398929 through 10:48:36.399269; duplicate log lines elided ...]
10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
[... same *ERROR* line repeated verbatim from 10:48:36.399298 through 10:48:36.407031; duplicate log lines elided ...]
[2024-07-12 10:48:36.407062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.407976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.455 [2024-07-12 10:48:36.408004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.408975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409000] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.409977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 
[2024-07-12 10:48:36.410102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.456 [2024-07-12 10:48:36.410722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.410752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.410781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.410809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.410839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.410863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.410893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.410924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.410954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.410988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411621] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.411878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 
[2024-07-12 10:48:36.412728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.457 [2024-07-12 10:48:36.412901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.412931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.412962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.412991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.413895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414592] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.414994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 
[2024-07-12 10:48:36.415407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.458 [2024-07-12 10:48:36.415832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.415859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.415893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.415922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.415952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.415979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.416006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.416034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.416061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.416087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.416117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.416150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.416296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.416325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.416358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.459 [2024-07-12 10:48:36.416387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.416980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417229] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.417947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 
[2024-07-12 10:48:36.417976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.749 [2024-07-12 10:48:36.418887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.750 [2024-07-12 10:48:36.418915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.750 [2024-07-12 10:48:36.418945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.750 [2024-07-12 10:48:36.418972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.750 [2024-07-12 10:48:36.419005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.750 [2024-07-12 10:48:36.419033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.750 [2024-07-12 10:48:36.419069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.750 [2024-07-12 10:48:36.419107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd read-length error repeated several hundred times, timestamps 10:48:36.419107 through 10:48:36.425394, elapsed 00:10:19.750-00:10:19.752 ...]
00:10:19.752 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... same error repeated several hundred more times, timestamps 10:48:36.425633 through 10:48:36.438297, elapsed 00:10:19.752-00:10:19.757 ...]
00:10:19.757 
[2024-07-12 10:48:36.438326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.438917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.439981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440407] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.757 [2024-07-12 10:48:36.440722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.440757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.440787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.440829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.440858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.440887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.440917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.440946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.440974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 
[2024-07-12 10:48:36.441275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.441984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.442999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443170] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.758 [2024-07-12 10:48:36.443649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.443677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.443707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.443731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.443755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.443913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.443942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.443966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.443991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 
[2024-07-12 10:48:36.444634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.444988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.445983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446230] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.446997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.447030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.447058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.447119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.447154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 
[2024-07-12 10:48:36.447184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.447213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.447240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.447275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.447302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.447344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.759 [2024-07-12 10:48:36.447373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.447995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.448996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449055] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.760 [2024-07-12 10:48:36.449586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 
[2024-07-12 10:48:36.449784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.449991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.761 [2024-07-12 10:48:36.450535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 "Read NLB 1 * block size 512 > SGL length 1" error repeats verbatim, timestamps 10:48:36.450562 through 10:48:36.460902 ...]
00:10:19.764 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same error continues repeating verbatim, timestamps 10:48:36.460933 through 10:48:36.469755 ...]
00:10:19.768 [2024-07-12 10:48:36.469782] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.469806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.469835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.469864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.469893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.469922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.469951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.469980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 
[2024-07-12 10:48:36.470541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.470919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.471986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.768 [2024-07-12 10:48:36.472387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472850] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.472967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 
[2024-07-12 10:48:36.473642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.473997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.474982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.475010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.475043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.475074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.475104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.769 [2024-07-12 10:48:36.475134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475637] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.475984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 
[2024-07-12 10:48:36.476436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.476983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.770 [2024-07-12 10:48:36.477619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.477973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478187] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.478984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 
[2024-07-12 10:48:36.479296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.771 [2024-07-12 10:48:36.479931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.479963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480871] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.480902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 [2024-07-12 10:48:36.481986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 
[2024-07-12 10:48:36.482014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.772 
[... identical *ERROR* line repeated several hundred times between 10:48:36.482014 and 10:48:36.501859; duplicate entries elided ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:19.778 
[2024-07-12 10:48:36.501859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.501885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.501914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.501954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.501983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.502993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503151] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.779 [2024-07-12 10:48:36.503700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.503728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.503759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.503791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.503820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.503851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.503881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.503912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.503950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 
[2024-07-12 10:48:36.503980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.504799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505856] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.505978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.780 [2024-07-12 10:48:36.506616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.506672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 
[2024-07-12 10:48:36.506703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.506732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.506763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.506790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.506822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.506853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.506880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.507989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508601] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.508967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 
[2024-07-12 10:48:36.509779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.781 [2024-07-12 10:48:36.509897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.509925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.509954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.509993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.510972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511370] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.511977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.782 [2024-07-12 10:48:36.512011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 
[2024-07-12 10:48:36.512492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.512980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.513980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.514006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.514035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.514066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.514334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.514368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.514398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.514425] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.783 [2024-07-12 10:48:36.514452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[identical *ERROR* line repeated verbatim several hundred times, wall clock 2024-07-12 10:48:36.514452 through 10:48:36.534006, build time 00:10:19.783 through 00:10:19.790; duplicates collapsed]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 
[2024-07-12 10:48:36.534841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.534941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.535089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.535117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.535148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 [2024-07-12 10:48:36.535177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.790 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:19.790 [2024-07-12 10:48:36.535205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 
10:48:36.535921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.535988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:19.791 [2024-07-12 10:48:36.536739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.536966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.537988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.791 [2024-07-12 10:48:36.538300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538596] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.538993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 
[2024-07-12 10:48:36.539318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.539863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.540976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.541011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.541039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.792 [2024-07-12 10:48:36.541080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541168] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.541997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 
[2024-07-12 10:48:36.542262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.542984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.793 [2024-07-12 10:48:36.543362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 true 00:10:19.794 [2024-07-12 10:48:36.543426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543796] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.543898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.544979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 
[2024-07-12 10:48:36.545064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.794 [2024-07-12 10:48:36.545789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[log collapsed: the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error, "Read NLB 1 * block size 512 > SGL length 1", repeats verbatim at every timestamp from 10:48:36.545820 through 10:48:36.565021; the duplicate records are elided here] 
00:10:19.800 [2024-07-12 10:48:36.565048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.565998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566048] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 
[2024-07-12 10:48:36.566813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.566996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.800 [2024-07-12 10:48:36.567879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.567906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.567933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.567961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.567987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568793] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.568987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:19.801 [2024-07-12 10:48:36.569556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569588] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.801 [2024-07-12 10:48:36.569896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.569989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 Message suppressed 999 times: [2024-07-12 10:48:36.570021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 Read completed with error (sct=0, sc=15) 00:10:19.801 [2024-07-12 10:48:36.570056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.570084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.801 [2024-07-12 10:48:36.570115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > 
SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.570977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.571991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572416] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.572977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.573004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.573033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.802 [2024-07-12 10:48:36.573065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:10:19.803 [2024-07-12 10:48:36.573210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.573980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.574990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575141] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 
[2024-07-12 10:48:36.575932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.575990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.803 [2024-07-12 10:48:36.576951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.804 [2024-07-12 10:48:36.576986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.804 [2024-07-12 10:48:36.577015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.804 [2024-07-12 10:48:36.577044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical *ERROR* line from ctrlr_bdev.c:309 repeats many hundreds of times with log timestamps 10:48:36.577044 through 10:48:36.596054 (pipeline elapsed time 00:10:19.804 to 00:10:19.809); the message text does not vary ...]
size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.596759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597139] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 
[2024-07-12 10:48:36.597918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.597977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.809 [2024-07-12 10:48:36.598486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.598516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.598557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.598588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.598618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.598651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.598680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.598709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.598743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.598777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599824] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.599974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 
[2024-07-12 10:48:36.600590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.600975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.601968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.602005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.602034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.602062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.602092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.810 [2024-07-12 10:48:36.602125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602495] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.602961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 
[2024-07-12 10:48:36.603300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.603980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.604990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.811 [2024-07-12 10:48:36.605019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605141] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.605798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:19.812 [2024-07-12 10:48:36.606043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.606074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.606102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.606134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.606167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.606197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.812 [2024-07-12 10:48:36.606225] ctrlr_bdev.c: 
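For context on the stream above: the target is rejecting a Read whose implied transfer (NLB times the block size) exceeds the SGL buffer length the host supplied, and sc=15 is NVMe generic status code 0x0f, Data SGL Length Invalid. Below is a minimal standalone sketch of that length check, assuming a zero-based NLB field per the NVMe spec; the helper name read_cmd_length_ok and the two status constants are illustrative, not SPDK's actual ctrlr_bdev.c code, and only the message text and the (sct=0, sc=15) pairing come from the log itself.

/*
 * Minimal sketch (not SPDK source) of the check behind the repeated
 * error above: a read of num_blocks * block_size bytes must fit in
 * the SGL the host supplied, or the command completes with
 * Data SGL Length Invalid (sct=0, sc=0x0f -- the "sc=15" in the log).
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC                0x00 /* NVMe generic command status type */
#define SC_DATA_SGL_LENGTH_INVALID 0x0f /* NVMe generic status code 15 */

/* Hypothetical helper: validate a read command's transfer length. */
static bool
read_cmd_length_ok(uint64_t num_blocks, uint32_t block_size,
                   uint64_t sgl_length, int *sct, int *sc)
{
	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr, "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
		        " > SGL length %" PRIu64 "\n",
		        num_blocks, block_size, sgl_length);
		*sct = SCT_GENERIC;
		*sc = SC_DATA_SGL_LENGTH_INVALID;
		return false;
	}
	return true;
}

int
main(void)
{
	int sct, sc;
	/* NVMe's NLB field is zero-based, so NLB=0 means one block.
	 * The commands in this test read 1 block of 512 bytes but
	 * supply a 1-byte SGL, so the check must fail. */
	uint64_t num_blocks = 0 + 1;

	if (!read_cmd_length_ok(num_blocks, 512, 1, &sct, &sc)) {
		printf("Read completed with error (sct=%d, sc=%d)\n", sct, sc);
	}
	return 0;
}

Compiled and run, the sketch emits the same two lines the log shows, once each: the *ERROR* message on stderr and the suppressed completion status on stdout.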
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated verbatim, timestamps 2024-07-12 10:48:36.606043 through 10:48:36.611572 (00:10:19.812-00:10:19.813) ...]
[2024-07-12 10:48:36.611602] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.611997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 [2024-07-12 10:48:36.612352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.813 
[2024-07-12 10:48:36.612705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.612734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.612761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.612792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.612824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.612858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.612886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.612914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.612943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.612973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.613971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614272] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.614841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 
[2024-07-12 10:48:36.615375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.615995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.616025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.616055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.616095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.616127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.616156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.616187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.616217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.814 [2024-07-12 10:48:36.616261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.616902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.617262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.815 [2024-07-12 10:48:36.617295] ctrlr_bdev.c: 
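For anyone skimming this burst: the message originates in SPDK's ctrlr_bdev.c (nvmf_bdev_ctrlr_read_cmd, line 309 per the log), which rejects a READ whose requested transfer, NLB blocks times the 512-byte block size, exceeds the SGL length supplied with the command, so the read is failed rather than submitted to the bdev. A minimal C sketch of that kind of length check follows; read_fits_sgl and its parameter names are illustrative assumptions, not an excerpt of the SPDK source.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative sketch (not SPDK source): a READ for nlb blocks of
     * block_size bytes must fit in the SGL the host supplied; otherwise
     * the command is rejected, producing the error logged above. */
    static bool
    read_fits_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
    {
        return nlb * block_size <= sgl_length;
    }

    int
    main(void)
    {
        uint64_t nlb = 1;
        uint32_t block_size = 512, sgl_length = 1;

        if (!read_fits_sgl(nlb, block_size, sgl_length)) {
            /* Reproduces the log line: 1 * 512 > 1, so the read is rejected. */
            fprintf(stderr, "Read NLB %llu * block size %u > SGL length %u\n",
                    (unsigned long long)nlb, block_size, sgl_length);
        }
        return 0;
    }

With the values repeated throughout this run (NLB 1, block size 512, SGL length 1) the check fails every time, which is why the same error line appears over and over.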
00:10:19.818 [2024-07-12 10:48:36.627280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.627983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628109] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.628998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 
[2024-07-12 10:48:36.629217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.629980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.818 [2024-07-12 10:48:36.630877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.630909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.630938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.630968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.630998] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 
[2024-07-12 10:48:36.631787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.631995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.632976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633678] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.819 [2024-07-12 10:48:36.633765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.633795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.633824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.633853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.633893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.633922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.633951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.633976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 
[2024-07-12 10:48:36.634439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.634953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.635987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.636015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.636045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.636076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.636106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.636137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.820 [2024-07-12 10:48:36.636169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636259] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.636975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 
[2024-07-12 10:48:36.637063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.637990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.821 [2024-07-12 10:48:36.638928] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same *ERROR* line repeated back-to-back, 2024-07-12 10:48:36.638959 through 10:48:36.641949; elapsed 00:10:19.821-00:10:19.822 ...]
00:10:19.822 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
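The flood above is a single check failing over and over: SPDK's NVMe-oF target rejects a read command whose transfer length (NLB blocks times the 512-byte block size) exceeds the length of the SGL-described data buffer, and the suppressed completion status (sct=0, sc=15) is the NVMe generic status "Data SGL Length Invalid" (0x0F) that the initiator sees for each rejected read. The sketch below is a minimal, self-contained approximation of that length check; the function name and structure are illustrative, not the exact ctrlr_bdev.c source.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Approximation of the validation at ctrlr_bdev.c:309 in
 * nvmf_bdev_ctrlr_read_cmd(): the requested read must fit inside the
 * buffer described by the command's SGL. Names here are hypothetical. */
static bool
read_len_fits_sgl(uint64_t nlb, uint64_t block_size, uint32_t sgl_length)
{
    if (nlb * block_size > sgl_length) {
        /* Mirrors the logged message; the real target then completes the
         * command with sct=0 (generic), sc=0x0F (Data SGL Length Invalid). */
        fprintf(stderr,
                "Read NLB %llu * block size %llu > SGL length %u\n",
                (unsigned long long)nlb, (unsigned long long)block_size,
                (unsigned)sgl_length);
        return false;
    }
    return true;
}

int main(void)
{
    /* The case this test drives: 1 block of 512 bytes vs a 1-byte SGL. */
    return read_len_fits_sgl(1, 512, 1) ? 0 : 1;
}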
[... the same *ERROR* line repeated back-to-back, 2024-07-12 10:48:36.642185 through 10:48:36.658473; elapsed 00:10:19.822-00:10:19.827 ...]
00:10:19.827 [2024-07-12 10:48:36.658502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.658980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659269] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.659996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 
[2024-07-12 10:48:36.660162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.660978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.827 [2024-07-12 10:48:36.661796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.661826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.661855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.661881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.661909] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.661939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.661968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.661998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.662995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 
[2024-07-12 10:48:36.663019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.663973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664683] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.664994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.665024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.665054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.665082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.665113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.665145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.828 [2024-07-12 10:48:36.665176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 
[2024-07-12 10:48:36.665688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.665990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.666777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667587] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.667992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 
[2024-07-12 10:48:36.668352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.829 [2024-07-12 10:48:36.668712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.668739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.668770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.668803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.668835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.668866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.668900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.668931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.668964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.668993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.669990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.670025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.670053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.670103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.670133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.670162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.670190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.670221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.670250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830 [2024-07-12 10:48:36.670286] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.830
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated continuously, 10:48:36.670344 through 10:48:36.676442 ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:19.832
[... identical *ERROR* line repeated continuously, 10:48:36.676479 through 10:48:36.689110 ...]
[2024-07-12 10:48:36.689141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.689953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.690987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.691019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.691048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.691077] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.691107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.691140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.691170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.691205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.691237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-07-12 10:48:36.691269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 
[2024-07-12 10:48:36.691935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.691993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.692979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693907] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.693995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 
[2024-07-12 10:48:36.694718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.694782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.695135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.695166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.695199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.695229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.695257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-07-12 10:48:36.695290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.695981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696596] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.696983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 
[2024-07-12 10:48:36.697728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.697971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.698000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.698029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.698058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.698091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.698126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.698160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.698189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.698220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.698251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-07-12 10:48:36.698280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.698999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699278] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.699970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 
[2024-07-12 10:48:36.700389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.700971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.838 [2024-07-12 10:48:36.701480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.701510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.701541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.701576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.701606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.701739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.701768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.701795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.701825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.702066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.702096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.702127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.702158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.702182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.702211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.702243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.839 [2024-07-12 10:48:36.702276] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:19.839 [2024-07-12 10:48:36.702305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated continuously from 10:48:36.702335 through 10:48:36.712945; duplicate lines condensed]
00:10:20.132 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:20.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:20.132 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
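The rpc.py call above is the hot-plug step of the ns_hotplug_stress test: while reads are still completing with errors, it attaches the delay bdev Delay0 as a namespace of subsystem nqn.2016-06.io.spdk:cnode1. A minimal by-hand replay of that step might look like the sketch below; it assumes a running SPDK target on the default RPC socket, and the nsid of 1 passed to the remove call is an illustrative assumption (the test script manages namespace IDs itself).

    # Sketch, not the test script itself: attach the delay bdev as a
    # namespace, then detach it again while I/O is in flight.
    # Assumes an SPDK target listening on the default RPC socket.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1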
00:10:20.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
[previous "Message suppressed" line repeated 4 more times; duplicate lines condensed]
00:10:20.132 [2024-07-12 10:48:36.886791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated continuously from 10:48:36.886833 through 10:48:36.894536; duplicate lines condensed, final entry truncated in the source capture]
size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.894978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895258] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.134 [2024-07-12 10:48:36.895955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.895984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 
[2024-07-12 10:48:36.896416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.896998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.897971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898236] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.898951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 
[2024-07-12 10:48:36.898980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.135 [2024-07-12 10:48:36.899434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.899765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900931] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.900990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 
[2024-07-12 10:48:36.901680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.901997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:20.136 [2024-07-12 10:48:36.902537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 
10:48:36.902742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.902998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.903025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.903054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.903084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.903113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.903146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.903174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.136 [2024-07-12 10:48:36.903201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:20.137 [2024-07-12 10:48:36.903475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.903988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.904980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905213] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 [2024-07-12 10:48:36.905918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.137 
[2024-07-12 10:48:36.905950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeats continuously, timestamps 10:48:36.905976 through 10:48:36.918332 ...]
00:10:20.141 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
[... identical *ERROR* line repeats, timestamps 10:48:36.918361 through 10:48:36.918686 ...]
00:10:20.141 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
[... identical *ERROR* line repeats, timestamps 10:48:36.918715 through 10:48:36.924726 ...]
00:10:20.143 [2024-07-12 10:48:36.924754] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.924795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.924823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.924851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.924881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.924909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.924943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.924974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 
[2024-07-12 10:48:36.925552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.925985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.926972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927378] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.143 [2024-07-12 10:48:36.927406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.927998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 
[2024-07-12 10:48:36.928155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.928986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.929987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930022] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.930997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.931028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.931055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.931086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.931113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.931147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.931177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 
[2024-07-12 10:48:36.931206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.931255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.931282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.931314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.144 [2024-07-12 10:48:36.931343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.931989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932743] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.932864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 
[2024-07-12 10:48:36.933882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.933998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.145 [2024-07-12 10:48:36.934893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.934924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.934949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.934979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935843] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.935987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 [2024-07-12 10:48:36.936624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.146 
[2024-07-12 10:48:36.936655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:20.146 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:20.151 [... the same ctrlr_bdev.c:309 "Read NLB 1 * block size 512 > SGL length 1" error repeats verbatim, several hundred times, from 10:48:36.936 through 10:48:36.955; duplicate entries elided ...]
[2024-07-12 10:48:36.954778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.954803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.954832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.954864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.954890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.954919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.954950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.954978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.955994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.151 [2024-07-12 10:48:36.956486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956602] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.956987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 
[2024-07-12 10:48:36.957345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.957965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.958982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959167] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.959858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.960198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.960227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.152 [2024-07-12 10:48:36.960254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 
[2024-07-12 10:48:36.960280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.960987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961716] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.961935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 
[2024-07-12 10:48:36.962804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.962958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.153 [2024-07-12 10:48:36.963663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.963996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964669] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.964991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 
[2024-07-12 10:48:36.965425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.965996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.966989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.154 [2024-07-12 10:48:36.967021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.155 [2024-07-12 10:48:36.967049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.155 [2024-07-12 10:48:36.967078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.155 [2024-07-12 10:48:36.967110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.155 [2024-07-12 10:48:36.967143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.155 [2024-07-12 10:48:36.967172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.155 [2024-07-12 10:48:36.967200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.155 [2024-07-12 10:48:36.967229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.155 [2024-07-12 10:48:36.967258] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:10:20.155 (the error line above repeats back-to-back from [2024-07-12 10:48:36.967288] through [2024-07-12 10:48:36.986186]; only the per-message timestamps differ) 
00:10:20.156 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
[2024-07-12 10:48:36.986215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.986981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.987985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988040] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.160 [2024-07-12 10:48:36.988638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.988662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.988691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.988715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.988742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 
[2024-07-12 10:48:36.988770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.989976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990745] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.990974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 
[2024-07-12 10:48:36.991820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.991972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.161 [2024-07-12 10:48:36.992550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.992974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993686] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.993981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 
[2024-07-12 10:48:36.994454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.994980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.995985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996270] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-07-12 10:48:36.996327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.996992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 
[2024-07-12 10:48:36.997050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.997751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.998093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-07-12 10:48:36.998126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:10:20.163 [2024-07-12 10:48:36.998153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical ctrlr_bdev.c:309 *ERROR* line repeats several hundred times between 10:48:36.998 and 10:48:37.017 and is elided here ...]
00:10:20.166 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... repetition of the same *ERROR* line continues through 10:48:37.017 ...]
[2024-07-12 10:48:37.017059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-07-12 10:48:37.017546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.017970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018924] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.018984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 
[2024-07-12 10:48:37.019699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.019974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.020981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-07-12 10:48:37.021520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021697] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.021997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 
[2024-07-12 10:48:37.022509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.022967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.023974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.024002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.024029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-07-12 10:48:37.024058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024328] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.024811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 
[2024-07-12 10:48:37.025472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.025985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.026979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027067] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-07-12 10:48:37.027926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.027955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.027987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 
[2024-07-12 10:48:37.028210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.028972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.029003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.172 [2024-07-12 10:48:37.029064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 read-length error repeated verbatim for every failed read, timestamps 10:48:37.029094 through 10:48:37.048597 ...]
00:10:20.176 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.047537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.047567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.047604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.047632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.047662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.047693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.047721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.047754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.047785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.047816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.177 [2024-07-12 10:48:37.048567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048624] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.048976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 
[2024-07-12 10:48:37.049412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.049958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.050987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051317] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.051980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.052010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.052039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.052078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 [2024-07-12 10:48:37.052103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.178 
[2024-07-12 10:48:37.052139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.052973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.053975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054065] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.054715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 
[2024-07-12 10:48:37.055252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.055971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.056002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.056035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.056065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.056097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.056131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.179 [2024-07-12 10:48:37.056158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056842] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.056993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.057970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 
[2024-07-12 10:48:37.058093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.058981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.180 [2024-07-12 10:48:37.059425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.059458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.059498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.059526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.059954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.059985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060098] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 [2024-07-12 10:48:37.060857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.181 
00:10:20.181 [2024-07-12 10:48:37.060884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:20.181 true
00:10:20.186 [2024-07-12 10:48:37.080522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[log collapsed: the *ERROR* line above repeated several hundred times between 10:48:37.060884 and 10:48:37.080522 while the unit test exercised the oversized-read path; only the first and last occurrences and the interleaved "true" test result are kept]
00:10:20.186 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:20.469 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548
00:10:20.469 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:20.471 [2024-07-12 10:48:37.096431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-12 10:48:37.096461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.096793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.097975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098380] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.098989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.099015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.099042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.099481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.099517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 
[2024-07-12 10:48:37.099544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.099573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.471 [2024-07-12 10:48:37.099600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.099979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.100980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101070] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.101988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 
[2024-07-12 10:48:37.102177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.472 [2024-07-12 10:48:37.102934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.102964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.102995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.103983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104077] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 
[2024-07-12 10:48:37.104868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.104986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.105902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106749] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.473 [2024-07-12 10:48:37.106775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.106805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.106851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.106879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.106912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.106943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.106971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.106999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 
[2024-07-12 10:48:37.107579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.107989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-07-12 10:48:37.108644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:10:20.474 [2024-07-12 10:48:37.108672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line repeated several hundred times (timestamps 10:48:37.108703 through 10:48:37.127713); repeats elided ...]
00:10:20.476 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-07-12 10:48:37.127747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.127774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.127806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.127832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.127859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.127887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.127918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.127949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.127980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.479 [2024-07-12 10:48:37.128467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.128500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.128529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.128563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.128592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.128934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.128964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.128993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129632] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.129980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 
[2024-07-12 10:48:37.130443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.130861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.480 [2024-07-12 10:48:37.131976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132410] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.132992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 
[2024-07-12 10:48:37.133172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.133994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.134977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135036] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.481 [2024-07-12 10:48:37.135816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.135873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.135901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.135936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.135963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.135991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 
[2024-07-12 10:48:37.136175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.136971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.137731] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 
[2024-07-12 10:48:37.138926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.138986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.482 [2024-07-12 10:48:37.139476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.483 [2024-07-12 10:48:37.139500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.483 [2024-07-12 10:48:37.139525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.483 [2024-07-12 10:48:37.139549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.483 [2024-07-12 10:48:37.139574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.483 [2024-07-12 10:48:37.139598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.483 [2024-07-12 10:48:37.139629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.483 [2024-07-12 10:48:37.139660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
00:10:20.483 [2024-07-12 10:48:37.139690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:20.483 [... the same "Read NLB 1 * block size 512 > SGL length 1" error line repeats, verbatim, several hundred more times between 10:48:37.139690 and 10:48:37.158981; duplicate lines omitted ...]
00:10:20.486 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:20.488 [2024-07-12 10:48:37.158981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:20.488
[2024-07-12 10:48:37.159011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.159998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160892] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.160990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.161018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.161047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.488 [2024-07-12 10:48:37.161071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 
[2024-07-12 10:48:37.161634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.161995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.162985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163522] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.163993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 
[2024-07-12 10:48:37.164318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.489 [2024-07-12 10:48:37.164575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.164604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.164632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.164660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.164691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.164717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.164746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.164781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.164821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.164850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.164878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.165985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166170] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 
[2024-07-12 10:48:37.166946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.166995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.167982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.490 [2024-07-12 10:48:37.168491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168912] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.168978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.169973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 
[2024-07-12 10:48:37.169999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.491 [2024-07-12 10:48:37.170802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:20.491 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated continuously for every queued read, timestamps 2024-07-12 10:48:37.170832 through 10:48:37.188543 ...]
00:10:20.496 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:20.496 [... identical *ERROR* line repeated, timestamps 10:48:37.188569 through 10:48:37.190013 ...]
00:10:20.496 [2024-07-12 10:48:37.190386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.496
[2024-07-12 10:48:37.190416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.496 [2024-07-12 10:48:37.190448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.190999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.191975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192031] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.192993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 
[2024-07-12 10:48:37.193170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.497 [2024-07-12 10:48:37.193568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.193980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.194633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.498 [2024-07-12 10:48:37.195064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.441 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.441 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:21.441 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:21.702 true 00:10:21.702 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:21.702 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.688 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.688 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:22.688 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:22.948 true 00:10:22.948 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:22.948 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.207 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.207 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:23.207 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:23.467 true 00:10:23.467 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:23.467 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.467 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.727 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:23.727 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 
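The cycle traced above is the ns_hotplug_stress loop: re-add the Delay0 namespace to nqn.2016-06.io.spdk:cnode1, grow the NULL1 null bdev by one block, confirm with kill -0 that the background I/O generator (PID 1963548) is still alive, then hot-remove the namespace again. The flood of ctrlr_bdev.c *ERROR* lines earlier is the target rejecting read commands whose transfer length (NLB 1 * block size 512 = 512 bytes) exceeds the 1-byte buffer described by the command's SGL; under this stress load the target throttles that noise with the "Message suppressed 999 times" notices. A minimal sketch of the loop, reconstructed from the rpc.py calls visible in the trace (the seq bounds and the io_pid variable are assumptions, not recovered script text):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for null_size in $(seq 1018 1048); do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # re-attach the namespace
        $rpc bdev_null_resize NULL1 "$null_size"                        # grow the backing null bdev by one block
        kill -0 "$io_pid"                                               # fail fast if the I/O generator died
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove namespace 1 again
    done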
00:10:23.988 true 00:10:23.988 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:23.988 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.988 10:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.252 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:24.252 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:24.514 true 00:10:24.514 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:24.514 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.514 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.774 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:24.774 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:25.036 true 00:10:25.036 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:25.036 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.036 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.296 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:25.297 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:25.557 true 00:10:25.557 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:25.557 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.557 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.817 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:25.817 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:25.817 true 00:10:26.077 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:26.077 10:48:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.077 10:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.338 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:26.338 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:26.338 true 00:10:26.338 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:26.338 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.598 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.857 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:26.857 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:26.857 true 00:10:26.857 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:26.857 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.117 10:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.377 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:27.377 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:27.377 true 00:10:27.377 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:27.377 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.637 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.897 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:27.897 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:27.897 true 00:10:27.897 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:27.897 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:28.157 10:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.418 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:28.418 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:28.418 true 00:10:28.418 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:28.418 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.679 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.679 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:28.679 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:28.940 true 00:10:28.940 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:28.940 10:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.882 10:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.882 10:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:29.882 10:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:30.143 true 00:10:30.143 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:30.143 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.405 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.405 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:30.405 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:30.666 true 00:10:30.666 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:30.666 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.927 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.927 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:30.927 10:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:31.188 true 00:10:31.188 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:31.188 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.449 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.449 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:31.449 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:31.710 true 00:10:31.710 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:31.710 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.710 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.971 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:31.971 10:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:32.231 true 00:10:32.231 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:32.231 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.231 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.492 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:32.492 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:32.751 true 00:10:32.751 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:32.751 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.751 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.011 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:33.011 10:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:33.271 true 00:10:33.271 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:33.271 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.271 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.531 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:33.531 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:33.531 true 00:10:33.792 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:33.792 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.792 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.052 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:34.052 10:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:34.052 true 00:10:34.312 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:34.312 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.312 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.572 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:34.572 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:34.572 true 00:10:34.572 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:34.572 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.832 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.092 10:48:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:35.092 10:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:35.092 true 00:10:35.092 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:35.092 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.352 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.612 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:35.612 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:35.612 true 00:10:35.612 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:35.612 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.872 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.132 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:36.132 10:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:36.132 true 00:10:36.132 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:36.132 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.391 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.651 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:36.651 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:36.651 true 00:10:36.651 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:36.651 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.910 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.170 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:37.170 10:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:37.170 true 00:10:37.170 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:37.170 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.430 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.430 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:37.430 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:37.690 true 00:10:37.690 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:37.690 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.950 Initializing NVMe Controllers 00:10:37.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:37.950 Controller IO queue size 128, less than required. 00:10:37.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:37.950 Controller IO queue size 128, less than required. 00:10:37.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:37.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:37.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:37.951 Initialization complete. Launching workers. 
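In the worker summary that follows, the Total row should be the IOPS-weighted combination of the two per-namespace rows; a quick arithmetic cross-check of the reported averages (an annotation using the figures from the table below, not part of the captured output):

    awk 'BEGIN { printf "%.1f\n", (1914.19*22234.07 + 10261.22*12474.78) / (1914.19 + 10261.22) }'
    # prints 14009.1, agreeing with the reported Total average latency of 14009.12 us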
00:10:37.951 ======================================================== 00:10:37.951 Latency(us) 00:10:37.951 Device Information : IOPS MiB/s Average min max 00:10:37.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1914.19 0.93 22234.07 1604.65 1053008.60 00:10:37.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10261.22 5.01 12474.78 1947.50 400002.76 00:10:37.951 ======================================================== 00:10:37.951 Total : 12175.42 5.95 14009.12 1604.65 1053008.60 00:10:37.951 00:10:37.951 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.951 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:37.951 10:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:38.210 true 00:10:38.210 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1963548 00:10:38.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1963548) - No such process 00:10:38.210 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1963548 00:10:38.210 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.471 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.471 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:38.471 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:38.471 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:38.471 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:38.471 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:38.732 null0 00:10:38.732 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:38.732 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:38.732 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:38.732 null1 00:10:38.992 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:38.992 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:38.992 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:38.992 null2 00:10:38.992 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:38.992 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:10:38.992 10:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:39.253 null3 00:10:39.253 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:39.253 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:39.253 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:39.253 null4 00:10:39.253 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:39.253 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:39.253 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:39.513 null5 00:10:39.513 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:39.513 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:39.513 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:39.774 null6 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:39.774 null7 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
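By this point the script has created null bdevs null0 through null7 (bdev_null_create takes name, size in MB, and block size in bytes, so each is 100 MB with 4096-byte blocks) and is forking one add_remove worker per bdev, recording each worker's PID. A minimal sketch of that launch phase, consistent with the @58-@64 trace lines above (anything beyond what the trace shows is an assumption):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096    # 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &           # one hotplug worker per namespace ID 1..8
        pids+=($!)                                 # remember the worker's PID for a later wait
    done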
00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
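Each null bdev created above (bdev_null_create <name> <size_mb> <block_size>, here 100 MB with 4096-byte blocks) is paired with exactly one worker. The @62-@64 entries are the launch loop; the & backgrounding is inferred from the pids+=($!) bookkeeping and the interleaved worker traces, so treat this as a sketch rather than the exact source:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # NSID i+1 stays paired with bdev null$i, so concurrent workers never collide
        add_remove $((i + 1)) "null$i" &
        pids+=($!)    # remember each background worker's PID
    done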
00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
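From this point the xtrace of all eight background workers interleaves, which is why the add/remove entries below repeat in no particular order. The parent script simply blocks until every worker has finished, as the @66 wait on the eight worker PIDs (1970054 through 1970067) a few entries below shows:

    # ns_hotplug_stress.sh@66 as it appears in the trace: block on all eight workers at once
    wait "${pids[@]}"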
00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1970054 1970055 1970057 1970059 1970061 1970063 1970065 1970067 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.774 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.036 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.036 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:40.036 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:40.036 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.036 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.036 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.036 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:40.036 10:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.298 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.559 10:48:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.559 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:40.560 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.560 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.560 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:40.560 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.820 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.821 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.081 10:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.081 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.081 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.081 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.081 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.081 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.081 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.082 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.082 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.082 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.342 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.602 
10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.602 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.603 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.863 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:42.123 10:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:42.123 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.383 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.384 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:42.384 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.384 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.384 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:42.384 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:42.384 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.384 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.384 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.384 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:42.643 10:48:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.643 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.643 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:42.643 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.643 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.643 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:42.643 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.643 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.644 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.903 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:43.163 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:43.163 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:43.163 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.163 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.164 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.164 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:43.164 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.164 10:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.164 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.424 10:49:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.424 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.424 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.424 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.424 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.424 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:43.424 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.424 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.684 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.684 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.684 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:43.684 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:43.684 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:43.684 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:43.684 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.685 rmmod nvme_tcp 00:10:43.685 rmmod nvme_fabrics 00:10:43.685 rmmod nvme_keyring 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1962893 ']' 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1962893 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1962893 ']' 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1962893 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1962893 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1962893' 00:10:43.685 killing 
process with pid 1962893 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1962893 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1962893 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.685 10:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.238 10:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:46.239 00:10:46.239 real 0m48.264s 00:10:46.239 user 3m11.256s 00:10:46.239 sys 0m15.917s 00:10:46.239 10:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.239 10:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.239 ************************************ 00:10:46.239 END TEST nvmf_ns_hotplug_stress 00:10:46.239 ************************************ 00:10:46.239 10:49:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:46.239 10:49:02 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:46.239 10:49:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:46.239 10:49:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.239 10:49:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.239 ************************************ 00:10:46.239 START TEST nvmf_connect_stress 00:10:46.239 ************************************ 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:46.239 * Looking for test storage... 
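The END TEST block just above is the nvmftestfini teardown from nvmf/common.sh, and the same sequence closes the connect_stress run further down in this log: reset the exit trap, retry modprobe -r nvme-tcp under set +e (unloading can fail while fabrics connections drain, hence the {1..20} loop; the rmmod output shows nvme_tcp, nvme_fabrics and nvme_keyring going away), unload nvme-fabrics, kill the nvmf_tgt pid and wait on it, then flush the test interface. A minimal sketch of that pattern, using the pid variable and the cvl_0_1 interface from this run; the sleep pacing and the explicit netns delete are assumptions, since _remove_spdk_ns runs with xtrace disabled here:

# nvmftestfini-style teardown (sketch; assumes $nvmfpid holds the nvmf_tgt pid
# and that the target was launched by this same shell, so `wait` can reap it)
sync
set +e
for i in {1..20}; do
  modprobe -v -r nvme-tcp && break   # may fail until in-flight NVMe/TCP queues drain;
                                     # also drops the nvme_keyring dependency, per the rmmod lines above
  sleep 1                            # pacing between retries (assumption)
done
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid"                      # SIGTERM the target...
wait "$nvmfpid"                      # ...and reap it so the listen port frees up
ip -4 addr flush cvl_0_1             # drop the 10.0.0.1/24 initiator address
ip netns delete cvl_0_0_ns_spdk      # assumption: _remove_spdk_ns tears down the target namespace

The retry loop is the important part: a straight modprobe -r right after the last disconnect races against the kernel releasing module references, which is why the harness tolerates failures for up to 20 attempts before giving up.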
00:10:46.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.239 10:49:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.240 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:46.241 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:46.241 10:49:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:46.242 10:49:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:52.829 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:52.829 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:52.830 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:52.830 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:52.830 10:49:09 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:52.830 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.830 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.090 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.090 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.090 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:53.090 10:49:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:53.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:53.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:10:53.090 00:10:53.090 --- 10.0.0.2 ping statistics --- 00:10:53.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.090 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:10:53.090 00:10:53.090 --- 10.0.0.1 ping statistics --- 00:10:53.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.090 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:53.090 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1975215 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1975215 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1975215 ']' 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.350 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.350 [2024-07-12 10:49:10.163828] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
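Everything from gather_supported_nvmf_pci_devs down to the two pings above is the NET_TYPE=phy bring-up: the ice-driven e810 ports are found at 0000:4b:00.0/.1 (vendor 0x8086, device 0x159b), their net devices cvl_0_0 and cvl_0_1 are collected, and nvmf_tcp_init splits them across network namespaces so target and initiator traffic actually crosses the physical link. A condensed sketch of that topology, using only commands and names that appear in this run's trace:

# nvmf_tcp_init topology (condensed from the trace above; no new names introduced)
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # private namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # one e810 port becomes the target NIC
ip addr add $NVMF_INITIATOR_IP/24 dev cvl_0_1     # the other stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add $NVMF_FIRST_TARGET_IP/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 $NVMF_FIRST_TARGET_IP                               # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 $NVMF_INITIATOR_IP    # target ns -> root ns
# nvmf_tgt is then started inside the namespace, as the trace shows:
# ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

With both pings returning, the connect_stress bring-up that follows issues nvmf_create_transport -t tcp -o -u 8192, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 (namespace cap 10), nvmf_subsystem_add_listener on 10.0.0.2:4420 and bdev_null_create NULL1 1000 512 over rpc_cmd, then runs the connect_stress binary against the subsystem for 10 seconds while the kill -0 loop below confirms it stays alive.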
00:10:53.350 [2024-07-12 10:49:10.163900] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.350 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.350 [2024-07-12 10:49:10.251790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.350 [2024-07-12 10:49:10.319167] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.350 [2024-07-12 10:49:10.319211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.350 [2024-07-12 10:49:10.319217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.350 [2024-07-12 10:49:10.319221] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.350 [2024-07-12 10:49:10.319225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.350 [2024-07-12 10:49:10.319396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.350 [2024-07-12 10:49:10.319554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.350 [2024-07-12 10:49:10.319556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.291 [2024-07-12 10:49:10.988417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.291 10:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.291 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.291 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.291 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.291 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.291 [2024-07-12 10:49:11.012781] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.291 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.291 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:54.291 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.292 NULL1 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1975245 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.292 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.553 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.553 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:54.553 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.553 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.553 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.814 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.814 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:54.814 10:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.814 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.814 10:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.385 10:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.385 10:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1975245 00:10:55.385 10:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.385 10:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.385 10:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.645 10:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.646 10:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:55.646 10:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.646 10:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.646 10:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.962 10:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.962 10:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:55.962 10:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.962 10:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.962 10:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.245 10:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.245 10:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:56.245 10:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.245 10:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.245 10:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.504 10:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.504 10:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:56.504 10:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.504 10:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.504 10:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.764 10:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.764 10:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:56.764 10:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.764 10:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.764 10:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.334 10:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.334 10:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:57.334 10:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.334 10:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.334 10:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.594 10:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.595 10:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:57.595 10:49:14 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.595 10:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.595 10:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.855 10:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.855 10:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:57.855 10:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.855 10:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.855 10:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.116 10:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.116 10:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:58.116 10:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.116 10:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.116 10:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.687 10:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.687 10:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:58.687 10:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.687 10:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.687 10:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.947 10:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.947 10:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:58.947 10:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.947 10:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.947 10:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.207 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.207 10:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:59.207 10:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.207 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.207 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.467 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.467 10:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:59.467 10:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.467 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.467 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.726 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.726 10:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:10:59.726 10:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:59.726 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.726 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.296 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.296 10:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:00.296 10:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.296 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.296 10:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.556 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.556 10:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:00.556 10:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.556 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.556 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.816 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.816 10:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:00.816 10:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.816 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.816 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.076 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.076 10:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:01.076 10:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.076 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.076 10:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.336 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.336 10:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:01.336 10:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.336 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.336 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.907 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.907 10:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:01.907 10:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.907 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.907 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.167 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.167 10:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:02.167 10:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.167 10:49:18 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.167 10:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.427 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.427 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:02.427 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.427 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.427 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.687 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.687 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:02.687 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.687 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.687 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.948 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.948 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:02.948 10:49:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.948 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.948 10:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.518 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.518 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:03.518 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.518 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.518 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.779 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.779 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:03.779 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.779 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.779 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.038 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.038 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:04.038 10:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.038 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.038 10:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.298 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1975245 00:11:04.298 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1975245) - No such process 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1975245 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.298 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:04.298 rmmod nvme_tcp 00:11:04.298 rmmod nvme_fabrics 00:11:04.298 rmmod nvme_keyring 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1975215 ']' 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1975215 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1975215 ']' 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1975215 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1975215 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1975215' 00:11:04.559 killing process with pid 1975215 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1975215 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1975215 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.559 10:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.100 10:49:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.100 00:11:07.100 real 0m20.744s 00:11:07.100 user 0m42.075s 00:11:07.100 sys 0m8.676s 00:11:07.100 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.100 10:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.100 ************************************ 00:11:07.100 END TEST nvmf_connect_stress 00:11:07.100 ************************************ 00:11:07.100 10:49:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:07.100 10:49:23 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:07.100 10:49:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:07.100 10:49:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.100 10:49:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:07.100 ************************************ 00:11:07.100 START TEST nvmf_fused_ordering 00:11:07.100 ************************************ 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:07.100 * Looking for test storage... 00:11:07.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.100 10:49:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.101 10:49:23 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.101 10:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:15.244 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:15.245 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:15.245 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:15.245 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:15.245 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:15.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:11:15.245 00:11:15.245 --- 10.0.0.2 ping statistics --- 00:11:15.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.245 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:11:15.245 10:49:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:11:15.245 00:11:15.245 --- 10.0.0.1 ping statistics --- 00:11:15.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.245 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1981597 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1981597 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1981597 ']' 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.245 [2024-07-12 10:49:31.114868] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:15.245 [2024-07-12 10:49:31.114963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.245 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.245 [2024-07-12 10:49:31.205197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.245 [2024-07-12 10:49:31.297330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.245 [2024-07-12 10:49:31.297386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.245 [2024-07-12 10:49:31.297400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.245 [2024-07-12 10:49:31.297407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.245 [2024-07-12 10:49:31.297412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
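Spelled out as standalone commands, the nvmf_tcp_init wiring logged above looks like the sketch below (a recap of the logged steps, not the harness's literal code path; cvl_0_0/cvl_0_1 are this machine's ice port names, and 4420 is the NVMF_PORT configured earlier):

    # Move the target port into its own namespace, give each side an address.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic, then verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Putting the target behind a network namespace lets a single host act as both initiator and target over real NICs, which is why the ping checks run once from each side.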
00:11:15.245 [2024-07-12 10:49:31.297438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.245 [2024-07-12 10:49:31.948191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.245 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.246 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.246 [2024-07-12 10:49:31.972398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.246 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.246 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:15.246 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.246 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.246 NULL1 00:11:15.246 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.246 10:49:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:15.246 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.246 10:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.246 10:49:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.246 10:49:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:15.246 10:49:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.246 10:49:32 
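The rpc_cmd calls above map directly onto scripts/rpc.py subcommands. Driven by hand against the target's default RPC socket, the bring-up would look roughly like this (a sketch with the flags exactly as the harness passed them; the rpc.py invocation style and the per-flag comments are assumptions consistent with the logged arguments):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ns="ip netns exec cvl_0_0_ns_spdk"
    rpc="$ns $spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10               # allow any host, serial, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The 1000 MiB null bdev is what the fused_ordering run below reports as "Namespace ID: 1 size: 1GB".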
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.246 10:49:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.246 10:49:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:15.246 [2024-07-12 10:49:32.040948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:15.246 [2024-07-12 10:49:32.041003] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1981684 ] 00:11:15.246 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.818 Attached to nqn.2016-06.io.spdk:cnode1 00:11:15.818 Namespace ID: 1 size: 1GB 00:11:15.818 fused_ordering(0) 00:11:15.818 fused_ordering(1) 00:11:15.818 [fused_ordering(2) through fused_ordering(1022): all 1024 fused-ordering operations completed in order, in bursts of roughly 205 operations with timestamps stepping 00:11:15.818 → 00:11:16.08 → 00:11:16.65 → 00:11:17.23 → 00:11:17.80; the per-operation lines are identical except for the counter] fused_ordering(1023) 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r
nvme-tcp 00:11:17.798 rmmod nvme_tcp 00:11:17.798 rmmod nvme_fabrics 00:11:17.798 rmmod nvme_keyring 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1981597 ']' 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1981597 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1981597 ']' 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1981597 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.798 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1981597 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1981597' 00:11:18.059 killing process with pid 1981597 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1981597 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1981597 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.059 10:49:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.604 10:49:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:20.604 00:11:20.604 real 0m13.372s 00:11:20.604 user 0m7.174s 00:11:20.604 sys 0m7.240s 00:11:20.604 10:49:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.604 10:49:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:20.604 ************************************ 00:11:20.604 END TEST nvmf_fused_ordering 00:11:20.604 ************************************ 00:11:20.604 10:49:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:20.604 10:49:37 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:20.604 10:49:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:20.604 10:49:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
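The fused_ordering teardown just logged (kernel module unload, target shutdown, namespace cleanup) reduces to a few commands. A sketch of the equivalent manual steps (the pid and namespace name are from this run; _remove_spdk_ns is harness-internal, so ip netns delete stands in for it here):

    modprobe -v -r nvme-tcp          # also drops the nvme_fabrics/nvme_keyring dependents
    modprobe -v -r nvme-fabrics
    kill 1981597 && wait 1981597     # stop the nvmf_tgt reactor (pid from nvmfpid)
    ip netns delete cvl_0_0_ns_spdk  # returns cvl_0_0 to the root namespace
    ip -4 addr flush cvl_0_1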
00:11:20.604 10:49:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:20.604 ************************************ 00:11:20.604 START TEST nvmf_delete_subsystem 00:11:20.604 ************************************ 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:20.604 * Looking for test storage... 00:11:20.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:20.604 10:49:37 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:20.604 10:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:28.748 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:28.748 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:28.748 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.749 
10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:28.749 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:28.749 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.749 10:49:44 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:11:28.749 00:11:28.749 --- 10.0.0.2 ping statistics --- 00:11:28.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.749 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:11:28.749 00:11:28.749 --- 10.0.0.1 ping statistics --- 00:11:28.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.749 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1986500 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1986500 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1986500 ']' 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.749 10:49:44 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.749 10:49:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.749 [2024-07-12 10:49:44.650250] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:28.749 [2024-07-12 10:49:44.650316] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.749 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.749 [2024-07-12 10:49:44.736791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:28.749 [2024-07-12 10:49:44.833884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.749 [2024-07-12 10:49:44.833944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.749 [2024-07-12 10:49:44.833953] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.749 [2024-07-12 10:49:44.833960] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.749 [2024-07-12 10:49:44.833966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
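Before the target came up, the nvmf_tcp_init trace above isolated one ice port in a private network namespace and verified reachability in both directions. Lifted directly from the trace (cvl_0_0/cvl_0_1 are this host's port names):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Running the target in its own namespace is what lets a single machine drive real NIC hardware as both initiator and target over phy links.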
00:11:28.749 [2024-07-12 10:49:44.834052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.749 [2024-07-12 10:49:44.834054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.749 [2024-07-12 10:49:45.507681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.749 [2024-07-12 10:49:45.531892] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.749 NULL1 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.749 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.750 Delay0 00:11:28.750 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.750 10:49:45 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.750 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.750 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.750 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.750 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1986638 00:11:28.750 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:28.750 10:49:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:28.750 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.750 [2024-07-12 10:49:45.648767] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:30.777 10:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.777 10:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.777 10:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.037 Write completed with error (sct=0, sc=8) 00:11:31.037 Write completed with error (sct=0, sc=8) 00:11:31.037 Write completed with error (sct=0, sc=8) 00:11:31.037 starting I/O failed: -6 00:11:31.037 Write completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 starting I/O failed: -6 00:11:31.037 Write completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 starting I/O failed: -6 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 starting I/O failed: -6 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 starting I/O failed: -6 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Write completed with error (sct=0, sc=8) 00:11:31.037 Write completed with error (sct=0, sc=8) 00:11:31.037 starting I/O failed: -6 00:11:31.037 Write completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Write completed with error (sct=0, sc=8) 00:11:31.037 starting I/O failed: -6 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.037 Read completed with error (sct=0, sc=8) 00:11:31.038 Write completed with error (sct=0, sc=8) 00:11:31.038 Read completed with error (sct=0, sc=8) 
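Condensed, the setup that leads into the failure injection below looks like the following. This is a sketch using rpc.py directly; the traced rpc_cmd helper wraps the same RPCs over one socket session, and paths are relative to the SPDK tree:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-I/O

The Delay0 bdev adds 1,000,000us to every operation, which guarantees that plenty of I/O is still in flight when the subsystem disappears.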
00:11:31.038 starting I/O failed: -6
00:11:31.038 Write completed with error (sct=0, sc=8)
00:11:31.038 Write completed with error (sct=0, sc=8)
00:11:31.038 Read completed with error (sct=0, sc=8)
00:11:31.038 Read completed with error (sct=0, sc=8)
[... repeated bursts of Read/Write 'completed with error (sct=0, sc=8)' completions, interleaved with 'starting I/O failed: -6' markers, elided ...]
00:11:31.038 [2024-07-12 10:49:47.815196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cd5c0 is same with the state(5) to be set
[... further Read/Write error completions and 'starting I/O failed: -6' markers elided ...]
00:11:31.038 [2024-07-12 10:49:47.819737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f27b000d2f0 is same with the state(5) to be set
[... further Read/Write error completions elided ...]
00:11:31.985 [2024-07-12 10:49:48.788011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ceac0 is same with the state(5) to be set
[... further Read/Write error completions elided ...]
00:11:31.985 [2024-07-12 10:49:48.818425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cd3e0 is same with the state(5) to be set
[... further Read/Write error completions elided ...]
00:11:31.985 [2024-07-12 10:49:48.818553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cd7a0 is same with the state(5) to be set
[... further Read/Write error completions elided ...]
00:11:31.985 [2024-07-12 10:49:48.821164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f27b000d600 is same with the state(5) to be set
[... further Read/Write error completions elided ...]
00:11:31.985 [2024-07-12 10:49:48.821801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f27b000cfe0 is same with the state(5) to be set
00:11:31.985 Initializing NVMe Controllers
00:11:31.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:31.985 Controller IO queue size 128, less than required.
00:11:31.985 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:31.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:31.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:31.985 Initialization complete. Launching workers.
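One interpretive note on the error storm above: assuming sc is the NVMe generic command status code (sct=0), value 8 decodes to "Command Aborted due to SQ Deletion", which is exactly what deleting the subsystem out from under 128 queued commands should produce. A toy decoder for the codes seen in such logs (illustrative only):

decode_generic_sc() {
    # subset of NVMe generic command status values (sct=0)
    case "$1" in
        0) echo "Successful Completion" ;;
        4) echo "Data Transfer Error" ;;
        8) echo "Command Aborted due to SQ Deletion" ;;
        *) echo "generic status $1 (see the NVMe base spec)" ;;
    esac
}
decode_generic_sc 8   # -> Command Aborted due to SQ Deletion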
00:11:31.986 ========================================================
00:11:31.986 Latency(us)
00:11:31.986 Device Information : IOPS MiB/s Average min max
00:11:31.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.78 0.08 900303.63 467.55 1006453.66
00:11:31.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.32 0.08 961342.97 304.66 2001803.09
00:11:31.986 ========================================================
00:11:31.986 Total : 328.11 0.16 929943.37 304.66 2001803.09
00:11:31.986
00:11:31.986 [2024-07-12 10:49:48.822207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ceac0 (9): Bad file descriptor
00:11:31.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:31.986 10:49:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:31.986 10:49:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:31.986 10:49:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1986638
00:11:31.986 10:49:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1986638
00:11:32.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1986638) - No such process
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1986638
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1986638
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1986638
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
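The NOT helper traced above (autotest_common.sh@648-675) is the test's negative assertion: waiting on the dead perf pid must fail. A simplified sketch of the logged flow; the real helper also validates that its argument is executable and special-cases exit statuses above 128:

NOT() {
    local es=0
    "$@" || es=$?
    # invert: NOT succeeds only when the wrapped command failed
    (( es != 0 ))
}
NOT wait "$perf_pid"   # passes here only because the perf process already exited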
00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.560 [2024-07-12 10:49:49.353074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1987337 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1987337 00:11:32.560 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.560 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.560 [2024-07-12 10:49:49.440853] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
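The trace then enters the liveness poll seen below (delete_subsystem.sh@56-60): the script waits up to roughly ten seconds for the second perf job (-t 3, against the 1s-latency Delay0 namespace) to exit on its own, checking with kill -0 every half second. The pattern, as traced:

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && exit 1   # fail the test if perf never finishes
    sleep 0.5
done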
00:11:33.131 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:33.131 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1987337 00:11:33.131 10:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:33.702 10:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:33.702 10:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1987337 00:11:33.702 10:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:33.962 10:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:33.962 10:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1987337 00:11:33.962 10:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:34.533 10:49:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:34.534 10:49:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1987337 00:11:34.534 10:49:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:35.104 10:49:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:35.104 10:49:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1987337 00:11:35.104 10:49:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:35.676 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:35.676 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1987337 00:11:35.676 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:35.937 Initializing NVMe Controllers 00:11:35.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:35.937 Controller IO queue size 128, less than required. 00:11:35.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:35.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:35.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:35.937 Initialization complete. Launching workers. 
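The numbers in the latency table below are worth a quick cross-check: every I/O traverses Delay0's 1,000,000us delay, so by Little's law per-core throughput should be roughly queue depth divided by latency. A sketch using the core-2 row:

# -q 128 outstanding I/Os per core at ~1.002s average latency:
echo $(( 128 * 1000000 / 1002135 ))   # ~127 IOPS per core, matching the reported 128.00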
00:11:35.937 ========================================================
00:11:35.937 Latency(us)
00:11:35.937 Device Information : IOPS MiB/s Average min max
00:11:35.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002134.66 1000144.94 1041572.19
00:11:35.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003279.61 1000328.69 1008922.82
00:11:35.937 ========================================================
00:11:35.937 Total : 256.00 0.12 1002707.13 1000144.94 1041572.19
00:11:35.937
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1987337
00:11:35.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1987337) - No such process
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1987337
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:35.937 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:35.937 rmmod nvme_tcp
00:11:36.199 rmmod nvme_fabrics
00:11:36.199 rmmod nvme_keyring
00:11:36.199 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:36.199 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:11:36.199 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:11:36.199 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1986500 ']'
00:11:36.199 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1986500
00:11:36.199 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1986500 ']'
00:11:36.199 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1986500
00:11:36.199 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:11:36.199 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:36.199 10:49:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1986500
00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1986500'
00:11:36.199 killing process with pid 1986500
00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1986500
00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
1986500 00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.199 10:49:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.743 10:49:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:38.743 00:11:38.743 real 0m18.132s 00:11:38.743 user 0m31.071s 00:11:38.743 sys 0m6.416s 00:11:38.743 10:49:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.743 10:49:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.743 ************************************ 00:11:38.743 END TEST nvmf_delete_subsystem 00:11:38.743 ************************************ 00:11:38.743 10:49:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:38.743 10:49:55 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:38.743 10:49:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:38.743 10:49:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.743 10:49:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:38.743 ************************************ 00:11:38.743 START TEST nvmf_ns_masking 00:11:38.743 ************************************ 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:38.743 * Looking for test storage... 
00:11:38.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=dacf733b-57f0-4c29-9d63-b17fbf180e0e 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f98efe71-2f5b-4d1d-83e5-470771615c8f 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c8044d8c-3e8a-42f6-882b-bf22eeaae56f 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:38.743 10:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:45.331 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:45.331 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:45.331 
10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:45.331 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.331 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:45.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:45.332 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.593 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.593 10:50:02 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:45.593 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:45.593 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:45.593 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:45.593 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:45.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:45.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms
00:11:45.854
00:11:45.854 --- 10.0.0.2 ping statistics ---
00:11:45.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:45.854 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:45.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:45.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms
00:11:45.854
00:11:45.854 --- 10.0.0.1 ping statistics ---
00:11:45.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:45.854 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1992318
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1992318
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1992318 ']'
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:45.854 10:50:02
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.854 10:50:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:45.854 [2024-07-12 10:50:02.741826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:45.854 [2024-07-12 10:50:02.741891] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.854 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.854 [2024-07-12 10:50:02.828814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.116 [2024-07-12 10:50:02.920983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.116 [2024-07-12 10:50:02.921042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.116 [2024-07-12 10:50:02.921051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.116 [2024-07-12 10:50:02.921058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.116 [2024-07-12 10:50:02.921064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.116 [2024-07-12 10:50:02.921091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.688 10:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.688 10:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:46.688 10:50:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:46.688 10:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:46.688 10:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:46.688 10:50:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.688 10:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:46.949 [2024-07-12 10:50:03.708849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.949 10:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:46.949 10:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:46.949 10:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:46.949 Malloc1 00:11:47.210 10:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:47.210 Malloc2 00:11:47.210 10:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
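The target bring-up captured above condenses to a short JSON-RPC sequence. A minimal sketch, assuming an SPDK checkout with nvmf_tgt already running and reachable over the default /var/tmp/spdk.sock (the trace itself invokes rpc.py by absolute path and wraps the target in an ip netns):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # transport flags exactly as traced
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1         # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME                              # -a allow any host, -s serial number

The trace continues below by attaching the malloc bdevs as namespaces and adding a TCP listener on 10.0.0.2:4420.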
00:11:47.472 10:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:47.733 10:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.733 [2024-07-12 10:50:04.657162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.733 10:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:47.733 10:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c8044d8c-3e8a-42f6-882b-bf22eeaae56f -a 10.0.0.2 -s 4420 -i 4 00:11:47.993 10:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.993 10:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:47.993 10:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.993 10:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:47.993 10:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.909 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:50.171 [ 0]:0x1 00:11:50.171 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:50.171 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:50.171 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb4eb7f6b0df4b118f57028d844c359b 00:11:50.171 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eb4eb7f6b0df4b118f57028d844c359b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.171 10:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
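Two helpers dominate the rest of this test, so it helps to see them in one piece. The following is a reconstruction from the xtrace output, not the literal ns_masking.sh source, so treat it as a sketch:

  connect() {
      # attach the initiator with an explicit host NQN and host UUID, 4 IO queues
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
          -I c8044d8c-3e8a-42f6-882b-bf22eeaae56f -a 10.0.0.2 -s 4420 -i 4
  }

  ns_is_visible() {
      # $1 is an nsid pattern such as 0x1; prints "[ 0]:0x1" when the namespace is listed
      nvme list-ns /dev/nvme0 | grep "$1"
      # as the traces below show, a namespace this host may not see identifies
      # with an all-zero NGUID, so that field decides pass or fail
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

Visibility is therefore judged from Identify Namespace data rather than from command failures, which is why wrapping ns_is_visible in NOT is enough to assert that masking took effect.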
00:11:50.171 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:50.171 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:50.171 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:50.171 [ 0]:0x1 00:11:50.171 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:50.171 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb4eb7f6b0df4b118f57028d844c359b 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eb4eb7f6b0df4b118f57028d844c359b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:50.432 [ 1]:0x2 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b52d8a512ae54224aeb46ff18928b61d 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b52d8a512ae54224aeb46ff18928b61d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:50.432 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.694 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.694 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:50.954 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:50.954 10:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c8044d8c-3e8a-42f6-882b-bf22eeaae56f -a 10.0.0.2 -s 4420 -i 4 00:11:51.216 10:50:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:51.216 10:50:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:51.216 10:50:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.216 10:50:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:51.216 10:50:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:51.216 10:50:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:53.130 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:53.131 10:50:10 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:53.131 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:53.392 [ 0]:0x2 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b52d8a512ae54224aeb46ff18928b61d 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
b52d8a512ae54224aeb46ff18928b61d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:53.392 [ 0]:0x1 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:53.392 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb4eb7f6b0df4b118f57028d844c359b 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eb4eb7f6b0df4b118f57028d844c359b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:53.653 [ 1]:0x2 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b52d8a512ae54224aeb46ff18928b61d 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b52d8a512ae54224aeb46ff18928b61d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:53.653 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:53.914 [ 0]:0x2 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b52d8a512ae54224aeb46ff18928b61d 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b52d8a512ae54224aeb46ff18928b61d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.914 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:54.174 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:54.174 10:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c8044d8c-3e8a-42f6-882b-bf22eeaae56f -a 10.0.0.2 -s 4420 -i 4 00:11:54.174 10:50:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:54.174 10:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.174 10:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.174 10:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:54.174 10:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:54.174 10:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.719 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.719 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.719 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.719 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
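Stripped of the xtrace noise, the per-host masking surface this test exercises is three RPCs (shortened rpc.py path here; the trace uses the absolute one):

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # namespace starts hidden
  scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1           # grant host1 visibility of nsid 1
  scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # revoke it again

The negative check further down points the same remove call at nsid 2, which was added without --no-auto-visible; the target answers with JSON-RPC error -32602 (Invalid parameters), and the NOT wrapper treats exactly that failure as the expected outcome.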
00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.720 [ 0]:0x1 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eb4eb7f6b0df4b118f57028d844c359b 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eb4eb7f6b0df4b118f57028d844c359b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:56.720 [ 1]:0x2 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b52d8a512ae54224aeb46ff18928b61d 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b52d8a512ae54224aeb46ff18928b61d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.720 10:50:13 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:56.980 [ 0]:0x2 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b52d8a512ae54224aeb46ff18928b61d 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b52d8a512ae54224aeb46ff18928b61d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:56.980 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:56.980 [2024-07-12 10:50:13.958648] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
request:
00:11:56.980 {
00:11:56.980 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:56.980 "nsid": 2,
00:11:56.980 "host": "nqn.2016-06.io.spdk:host1",
00:11:56.980 "method": "nvmf_ns_remove_host",
00:11:56.980 "req_id": 1
00:11:56.980 }
00:11:56.980 Got JSON-RPC error response
00:11:56.980 response:
00:11:56.980 {
00:11:56.980 "code": -32602,
00:11:56.980 "message": "Invalid parameters"
00:11:56.980 }
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:57.243 10:50:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 0]:0x2
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b52d8a512ae54224aeb46ff18928b61d
00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[
b52d8a512ae54224aeb46ff18928b61d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:57.243 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.504 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1994765 00:11:57.504 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.504 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:57.504 10:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1994765 /var/tmp/host.sock 00:11:57.504 10:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1994765 ']' 00:11:57.504 10:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:57.504 10:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.504 10:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:57.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:57.505 10:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.505 10:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:57.505 [2024-07-12 10:50:14.357510] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:57.505 [2024-07-12 10:50:14.357566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994765 ] 00:11:57.505 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.505 [2024-07-12 10:50:14.435632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.766 [2024-07-12 10:50:14.504081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.338 10:50:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.338 10:50:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:58.338 10:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.338 10:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.599 10:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid dacf733b-57f0-4c29-9d63-b17fbf180e0e 00:11:58.599 10:50:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:58.599 10:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DACF733B57F04C299D63B17FBF180E0E -i 00:11:58.860 10:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f98efe71-2f5b-4d1d-83e5-470771615c8f 00:11:58.860 10:50:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:58.860 10:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F98EFE712F5B4D1D83E5470771615C8F -i 00:11:58.860 10:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:59.121 10:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:59.381 10:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:59.381 10:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:59.641 nvme0n1 00:11:59.641 10:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:59.641 10:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:59.901 nvme1n2 00:12:00.161 10:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:00.161 10:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:00.161 10:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:00.161 10:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:00.161 10:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:00.161 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:00.161 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:00.161 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:00.161 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:00.422 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ dacf733b-57f0-4c29-9d63-b17fbf180e0e == \d\a\c\f\7\3\3\b\-\5\7\f\0\-\4\c\2\9\-\9\d\6\3\-\b\1\7\f\b\f\1\8\0\e\0\e ]] 00:12:00.422 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:00.422 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:00.422 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ f98efe71-2f5b-4d1d-83e5-470771615c8f == \f\9\8\e\f\e\7\1\-\2\f\5\b\-\4\d\1\d\-\8\3\e\5\-\4\7\0\7\7\1\6\1\5\c\8\f ]] 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1994765 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1994765 ']' 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1994765 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1994765 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1994765' 00:12:00.682 killing process with pid 1994765 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1994765 00:12:00.682 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1994765 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:00.943 10:50:17 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.943 rmmod nvme_tcp 00:12:00.943 rmmod nvme_fabrics 00:12:00.943 rmmod nvme_keyring 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1992318 ']' 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1992318 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1992318 ']' 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1992318 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.943 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1992318 00:12:01.203 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:01.203 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:01.203 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1992318' 00:12:01.203 killing process with pid 1992318 00:12:01.203 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1992318 00:12:01.203 10:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1992318 00:12:01.204 10:50:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.204 10:50:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.204 10:50:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.204 10:50:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.204 10:50:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.204 10:50:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.204 10:50:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.204 10:50:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.835 10:50:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:03.835 00:12:03.835 real 0m24.876s 00:12:03.835 user 0m25.176s 00:12:03.835 sys 0m7.503s 00:12:03.835 10:50:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.835 10:50:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:03.835 ************************************ 00:12:03.835 END TEST nvmf_ns_masking 00:12:03.835 ************************************ 00:12:03.835 10:50:20 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:03.835 10:50:20 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:03.835 10:50:20 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:03.835 10:50:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:03.836 10:50:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.836 10:50:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:03.836 ************************************ 00:12:03.836 START TEST nvmf_nvme_cli 00:12:03.836 ************************************ 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:03.836 * Looking for test storage... 00:12:03.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:03.836 10:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:10.416 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:10.416 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:10.416 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.416 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:10.416 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.417 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.677 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.677 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.677 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:10.677 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.677 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.677 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.677 10:50:27 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:10.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:12:10.937 00:12:10.937 --- 10.0.0.2 ping statistics --- 00:12:10.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.937 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:12:10.937 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:12:10.937 00:12:10.937 --- 10.0.0.1 ping statistics --- 00:12:10.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.937 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:12:10.937 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.937 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:10.937 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:10.937 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1999567 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1999567 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1999567 ']' 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.938 10:50:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.938 [2024-07-12 10:50:27.782650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
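Condensed for reference, the network setup the harness just completed isolates one port of the E810 pair in a network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the physical link. These are the same ip/ping/iptables commands as above with the xtrace noise stripped; cvl_0_0 and cvl_0_1 are this machine's renamed ice ports:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator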
00:12:10.938 [2024-07-12 10:50:27.782718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.938 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.938 [2024-07-12 10:50:27.874493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.198 [2024-07-12 10:50:27.973649] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.198 [2024-07-12 10:50:27.973714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.198 [2024-07-12 10:50:27.973723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.198 [2024-07-12 10:50:27.973729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.198 [2024-07-12 10:50:27.973735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.198 [2024-07-12 10:50:27.973810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.198 [2024-07-12 10:50:27.973951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.198 [2024-07-12 10:50:27.974111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.198 [2024-07-12 10:50:27.974112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 [2024-07-12 10:50:28.633349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 Malloc0 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 Malloc1 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.769 10:50:28 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 [2024-07-12 10:50:28.735285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.769 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:12.030 00:12:12.030 Discovery Log Number of Records 2, Generation counter 2 00:12:12.030 =====Discovery Log Entry 0====== 00:12:12.030 trtype: tcp 00:12:12.030 adrfam: ipv4 00:12:12.030 subtype: current discovery subsystem 00:12:12.030 treq: not required 00:12:12.030 portid: 0 00:12:12.030 trsvcid: 4420 00:12:12.030 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:12.030 traddr: 10.0.0.2 00:12:12.030 eflags: explicit discovery connections, duplicate discovery information 00:12:12.030 sectype: none 00:12:12.030 =====Discovery Log Entry 1====== 00:12:12.030 trtype: tcp 00:12:12.030 adrfam: ipv4 00:12:12.030 subtype: nvme subsystem 00:12:12.030 treq: not required 00:12:12.030 portid: 0 00:12:12.030 trsvcid: 4420 00:12:12.030 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:12.030 traddr: 10.0.0.2 00:12:12.030 eflags: none 00:12:12.030 sectype: none 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:12.030 10:50:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.946 10:50:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:13.946 10:50:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:13.946 10:50:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.946 10:50:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:13.946 10:50:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:13.946 10:50:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:15.858 10:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:15.858 10:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:15.858 10:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.858 10:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:15.858 10:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:15.859 10:50:32 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:15.859 /dev/nvme0n1 ]] 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:15.859 10:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.120 10:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.120 10:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:16.120 10:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.120 rmmod nvme_tcp 00:12:16.120 rmmod nvme_fabrics 00:12:16.120 rmmod nvme_keyring 00:12:16.120 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1999567 ']' 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1999567 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1999567 ']' 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1999567 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1999567 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1999567' 00:12:16.379 killing process with pid 1999567 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1999567 00:12:16.379 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1999567 00:12:16.380 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.380 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.380 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.380 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.380 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.380 10:50:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.380 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.380 10:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.925 10:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:18.925 00:12:18.925 real 0m15.125s 00:12:18.925 user 0m23.302s 00:12:18.925 sys 0m6.208s 00:12:18.925 10:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.925 10:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.925 ************************************ 00:12:18.925 END TEST nvmf_nvme_cli 00:12:18.925 ************************************ 00:12:18.925 10:50:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:18.925 10:50:35 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:18.925 10:50:35 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:18.925 10:50:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:18.925 10:50:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.925 10:50:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:18.925 ************************************ 00:12:18.925 START TEST nvmf_vfio_user 00:12:18.925 ************************************ 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:18.925 * Looking for test storage... 00:12:18.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.925 10:50:35 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:18.926 
10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2001350 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2001350' 00:12:18.926 Process pid: 2001350 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2001350 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2001350 ']' 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.926 10:50:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:18.926 [2024-07-12 10:50:35.646688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:18.926 [2024-07-12 10:50:35.646756] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.926 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.926 [2024-07-12 10:50:35.728874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.926 [2024-07-12 10:50:35.799106] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.926 [2024-07-12 10:50:35.799154] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.926 [2024-07-12 10:50:35.799160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.926 [2024-07-12 10:50:35.799165] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.926 [2024-07-12 10:50:35.799169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
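The tracepoint notices above are actionable if a vfio-user run needs debugging: the target was started with -e 0xFFFF and shm id 0, so a snapshot can be taken while it runs, or the shared-memory trace preserved for later. A short sketch based only on the hints the application itself printed (the copy destination is arbitrary, and offline decoding of the copied file with the same spdk_trace tool is assumed):

  # live snapshot while the target (shm id 0) is up
  ./build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw trace around for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/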
00:12:18.926 [2024-07-12 10:50:35.799313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.926 [2024-07-12 10:50:35.799526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.926 [2024-07-12 10:50:35.799681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.926 [2024-07-12 10:50:35.799681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.496 10:50:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.496 10:50:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:19.496 10:50:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:20.878 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:20.878 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:20.878 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:20.878 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:20.878 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:20.878 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:20.878 Malloc1 00:12:20.878 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:21.139 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:21.139 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:21.398 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:21.398 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:21.398 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:21.658 Malloc2 00:12:21.658 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:21.658 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:21.918 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:22.182 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:22.182 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:22.182 10:50:38 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:22.182 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:22.182 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:22.182 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:22.182 [2024-07-12 10:50:39.001475] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:22.182 [2024-07-12 10:50:39.001516] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002042 ] 00:12:22.182 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.182 [2024-07-12 10:50:39.031243] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:22.182 [2024-07-12 10:50:39.041411] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:22.182 [2024-07-12 10:50:39.041427] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7defa97000 00:12:22.182 [2024-07-12 10:50:39.042412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:22.182 [2024-07-12 10:50:39.043412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:22.182 [2024-07-12 10:50:39.044422] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:22.182 [2024-07-12 10:50:39.045432] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:22.182 [2024-07-12 10:50:39.046443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:22.182 [2024-07-12 10:50:39.047446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:22.182 [2024-07-12 10:50:39.048448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:22.182 [2024-07-12 10:50:39.049450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:22.182 [2024-07-12 10:50:39.050468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:22.182 [2024-07-12 10:50:39.050475] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7defa8c000 00:12:22.182 [2024-07-12 10:50:39.051388] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:22.182 [2024-07-12 10:50:39.059837] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:22.182 [2024-07-12 10:50:39.059857] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:22.182 [2024-07-12 10:50:39.064546] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:22.182 [2024-07-12 10:50:39.064586] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:22.182 [2024-07-12 10:50:39.064648] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:22.182 [2024-07-12 10:50:39.064663] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:22.182 [2024-07-12 10:50:39.064667] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:22.182 [2024-07-12 10:50:39.065539] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:22.182 [2024-07-12 10:50:39.065547] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:22.182 [2024-07-12 10:50:39.065551] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:22.183 [2024-07-12 10:50:39.066545] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:22.183 [2024-07-12 10:50:39.066552] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:22.183 [2024-07-12 10:50:39.066557] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:22.183 [2024-07-12 10:50:39.067552] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:22.183 [2024-07-12 10:50:39.067558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:22.183 [2024-07-12 10:50:39.068555] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:22.183 [2024-07-12 10:50:39.068561] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:22.183 [2024-07-12 10:50:39.068565] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:22.183 [2024-07-12 10:50:39.068569] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:22.183 [2024-07-12 10:50:39.068674] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:22.183 [2024-07-12 10:50:39.068677] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:22.183 [2024-07-12 10:50:39.068681] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:22.183 [2024-07-12 10:50:39.069560] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:22.183 [2024-07-12 10:50:39.070566] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:22.183 [2024-07-12 10:50:39.071580] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:22.183 [2024-07-12 10:50:39.072578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:22.183 [2024-07-12 10:50:39.072637] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:22.183 [2024-07-12 10:50:39.073586] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:22.183 [2024-07-12 10:50:39.073594] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:22.183 [2024-07-12 10:50:39.073597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073612] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:22.183 [2024-07-12 10:50:39.073620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073632] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:22.183 [2024-07-12 10:50:39.073636] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:22.183 [2024-07-12 10:50:39.073648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:22.183 [2024-07-12 10:50:39.073683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:22.183 [2024-07-12 10:50:39.073690] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:22.183 [2024-07-12 10:50:39.073695] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:22.183 [2024-07-12 10:50:39.073698] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:22.183 [2024-07-12 10:50:39.073702] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:22.183 [2024-07-12 10:50:39.073705] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:22.183 [2024-07-12 10:50:39.073708] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:22.183 [2024-07-12 10:50:39.073711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073725] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:22.183 [2024-07-12 10:50:39.073737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:22.183 [2024-07-12 10:50:39.073748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.183 [2024-07-12 10:50:39.073754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.183 [2024-07-12 10:50:39.073760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.183 [2024-07-12 10:50:39.073766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.183 [2024-07-12 10:50:39.073769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:22.183 [2024-07-12 10:50:39.073794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:22.183 [2024-07-12 10:50:39.073798] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:22.183 [2024-07-12 10:50:39.073801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:22.183 [2024-07-12 10:50:39.073825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:22.183 [2024-07-12 10:50:39.073866] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073877] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:22.183 [2024-07-12 10:50:39.073880] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:22.183 [2024-07-12 10:50:39.073884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:22.183 [2024-07-12 10:50:39.073894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:22.183 [2024-07-12 10:50:39.073901] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:22.183 [2024-07-12 10:50:39.073907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073917] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:22.183 [2024-07-12 10:50:39.073920] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:22.183 [2024-07-12 10:50:39.073924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:22.183 [2024-07-12 10:50:39.073944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:22.183 [2024-07-12 10:50:39.073953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073963] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:22.183 [2024-07-12 10:50:39.073966] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:22.183 [2024-07-12 10:50:39.073971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:22.183 [2024-07-12 10:50:39.073983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:22.183 [2024-07-12 10:50:39.073990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.073994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
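The register traffic traced above (CAP at offset 0x0, VS at 0x8, CC at 0x14, CSTS at 0x1c, then the IDENTIFY chain) is spdk_nvme_identify bringing the vfio-user controller from disabled to ready. The invocation is verbatim from this run; only the $SPDK shorthand is added, and the -L flags are what enable the nvme, nvme_vfio, and vfio_pci debug logs that make up this trace.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci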
00:12:22.183 [2024-07-12 10:50:39.074000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.074004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.074007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.074011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.074015] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:22.183 [2024-07-12 10:50:39.074018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:22.183 [2024-07-12 10:50:39.074021] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:22.183 [2024-07-12 10:50:39.074036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:22.183 [2024-07-12 10:50:39.074046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:22.183 [2024-07-12 10:50:39.074053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:22.183 [2024-07-12 10:50:39.074060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:22.183 [2024-07-12 10:50:39.074068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:22.183 [2024-07-12 10:50:39.074081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:22.183 [2024-07-12 10:50:39.074088] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:22.184 [2024-07-12 10:50:39.074100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:22.184 [2024-07-12 10:50:39.074108] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:22.184 [2024-07-12 10:50:39.074112] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:22.184 [2024-07-12 10:50:39.074114] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:22.184 [2024-07-12 10:50:39.074116] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:22.184 [2024-07-12 10:50:39.074121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:22.184 [2024-07-12 10:50:39.074131] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:22.184 
[2024-07-12 10:50:39.074134] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:22.184 [2024-07-12 10:50:39.074138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:22.184 [2024-07-12 10:50:39.074143] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:22.184 [2024-07-12 10:50:39.074147] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:22.184 [2024-07-12 10:50:39.074151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:22.184 [2024-07-12 10:50:39.074157] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:22.184 [2024-07-12 10:50:39.074160] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:22.184 [2024-07-12 10:50:39.074165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:22.184 [2024-07-12 10:50:39.074169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:22.184 [2024-07-12 10:50:39.074178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:22.184 [2024-07-12 10:50:39.074185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:22.184 [2024-07-12 10:50:39.074190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:22.184 ===================================================== 00:12:22.184 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:22.184 ===================================================== 00:12:22.184 Controller Capabilities/Features 00:12:22.184 ================================ 00:12:22.184 Vendor ID: 4e58 00:12:22.184 Subsystem Vendor ID: 4e58 00:12:22.184 Serial Number: SPDK1 00:12:22.184 Model Number: SPDK bdev Controller 00:12:22.184 Firmware Version: 24.09 00:12:22.184 Recommended Arb Burst: 6 00:12:22.184 IEEE OUI Identifier: 8d 6b 50 00:12:22.184 Multi-path I/O 00:12:22.184 May have multiple subsystem ports: Yes 00:12:22.184 May have multiple controllers: Yes 00:12:22.184 Associated with SR-IOV VF: No 00:12:22.184 Max Data Transfer Size: 131072 00:12:22.184 Max Number of Namespaces: 32 00:12:22.184 Max Number of I/O Queues: 127 00:12:22.184 NVMe Specification Version (VS): 1.3 00:12:22.184 NVMe Specification Version (Identify): 1.3 00:12:22.184 Maximum Queue Entries: 256 00:12:22.184 Contiguous Queues Required: Yes 00:12:22.184 Arbitration Mechanisms Supported 00:12:22.184 Weighted Round Robin: Not Supported 00:12:22.184 Vendor Specific: Not Supported 00:12:22.184 Reset Timeout: 15000 ms 00:12:22.184 Doorbell Stride: 4 bytes 00:12:22.184 NVM Subsystem Reset: Not Supported 00:12:22.184 Command Sets Supported 00:12:22.184 NVM Command Set: Supported 00:12:22.184 Boot Partition: Not Supported 00:12:22.184 Memory Page Size Minimum: 4096 bytes 00:12:22.184 Memory Page Size Maximum: 4096 bytes 00:12:22.184 Persistent Memory Region: Not Supported 
00:12:22.184 Optional Asynchronous Events Supported 00:12:22.184 Namespace Attribute Notices: Supported 00:12:22.184 Firmware Activation Notices: Not Supported 00:12:22.184 ANA Change Notices: Not Supported 00:12:22.184 PLE Aggregate Log Change Notices: Not Supported 00:12:22.184 LBA Status Info Alert Notices: Not Supported 00:12:22.184 EGE Aggregate Log Change Notices: Not Supported 00:12:22.184 Normal NVM Subsystem Shutdown event: Not Supported 00:12:22.184 Zone Descriptor Change Notices: Not Supported 00:12:22.184 Discovery Log Change Notices: Not Supported 00:12:22.184 Controller Attributes 00:12:22.184 128-bit Host Identifier: Supported 00:12:22.184 Non-Operational Permissive Mode: Not Supported 00:12:22.184 NVM Sets: Not Supported 00:12:22.184 Read Recovery Levels: Not Supported 00:12:22.184 Endurance Groups: Not Supported 00:12:22.184 Predictable Latency Mode: Not Supported 00:12:22.184 Traffic Based Keep ALive: Not Supported 00:12:22.184 Namespace Granularity: Not Supported 00:12:22.184 SQ Associations: Not Supported 00:12:22.184 UUID List: Not Supported 00:12:22.184 Multi-Domain Subsystem: Not Supported 00:12:22.184 Fixed Capacity Management: Not Supported 00:12:22.184 Variable Capacity Management: Not Supported 00:12:22.184 Delete Endurance Group: Not Supported 00:12:22.184 Delete NVM Set: Not Supported 00:12:22.184 Extended LBA Formats Supported: Not Supported 00:12:22.184 Flexible Data Placement Supported: Not Supported 00:12:22.184 00:12:22.184 Controller Memory Buffer Support 00:12:22.184 ================================ 00:12:22.184 Supported: No 00:12:22.184 00:12:22.184 Persistent Memory Region Support 00:12:22.184 ================================ 00:12:22.184 Supported: No 00:12:22.184 00:12:22.184 Admin Command Set Attributes 00:12:22.184 ============================ 00:12:22.184 Security Send/Receive: Not Supported 00:12:22.184 Format NVM: Not Supported 00:12:22.184 Firmware Activate/Download: Not Supported 00:12:22.184 Namespace Management: Not Supported 00:12:22.184 Device Self-Test: Not Supported 00:12:22.184 Directives: Not Supported 00:12:22.184 NVMe-MI: Not Supported 00:12:22.184 Virtualization Management: Not Supported 00:12:22.184 Doorbell Buffer Config: Not Supported 00:12:22.184 Get LBA Status Capability: Not Supported 00:12:22.184 Command & Feature Lockdown Capability: Not Supported 00:12:22.184 Abort Command Limit: 4 00:12:22.184 Async Event Request Limit: 4 00:12:22.184 Number of Firmware Slots: N/A 00:12:22.184 Firmware Slot 1 Read-Only: N/A 00:12:22.184 Firmware Activation Without Reset: N/A 00:12:22.184 Multiple Update Detection Support: N/A 00:12:22.184 Firmware Update Granularity: No Information Provided 00:12:22.184 Per-Namespace SMART Log: No 00:12:22.184 Asymmetric Namespace Access Log Page: Not Supported 00:12:22.184 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:22.184 Command Effects Log Page: Supported 00:12:22.184 Get Log Page Extended Data: Supported 00:12:22.184 Telemetry Log Pages: Not Supported 00:12:22.184 Persistent Event Log Pages: Not Supported 00:12:22.184 Supported Log Pages Log Page: May Support 00:12:22.184 Commands Supported & Effects Log Page: Not Supported 00:12:22.184 Feature Identifiers & Effects Log Page:May Support 00:12:22.184 NVMe-MI Commands & Effects Log Page: May Support 00:12:22.184 Data Area 4 for Telemetry Log: Not Supported 00:12:22.184 Error Log Page Entries Supported: 128 00:12:22.184 Keep Alive: Supported 00:12:22.184 Keep Alive Granularity: 10000 ms 00:12:22.184 00:12:22.184 NVM Command Set Attributes 
00:12:22.184 ========================== 00:12:22.184 Submission Queue Entry Size 00:12:22.184 Max: 64 00:12:22.184 Min: 64 00:12:22.184 Completion Queue Entry Size 00:12:22.184 Max: 16 00:12:22.184 Min: 16 00:12:22.184 Number of Namespaces: 32 00:12:22.184 Compare Command: Supported 00:12:22.184 Write Uncorrectable Command: Not Supported 00:12:22.184 Dataset Management Command: Supported 00:12:22.184 Write Zeroes Command: Supported 00:12:22.184 Set Features Save Field: Not Supported 00:12:22.184 Reservations: Not Supported 00:12:22.184 Timestamp: Not Supported 00:12:22.184 Copy: Supported 00:12:22.184 Volatile Write Cache: Present 00:12:22.184 Atomic Write Unit (Normal): 1 00:12:22.184 Atomic Write Unit (PFail): 1 00:12:22.184 Atomic Compare & Write Unit: 1 00:12:22.184 Fused Compare & Write: Supported 00:12:22.184 Scatter-Gather List 00:12:22.184 SGL Command Set: Supported (Dword aligned) 00:12:22.184 SGL Keyed: Not Supported 00:12:22.184 SGL Bit Bucket Descriptor: Not Supported 00:12:22.184 SGL Metadata Pointer: Not Supported 00:12:22.184 Oversized SGL: Not Supported 00:12:22.184 SGL Metadata Address: Not Supported 00:12:22.184 SGL Offset: Not Supported 00:12:22.184 Transport SGL Data Block: Not Supported 00:12:22.184 Replay Protected Memory Block: Not Supported 00:12:22.184 00:12:22.184 Firmware Slot Information 00:12:22.184 ========================= 00:12:22.184 Active slot: 1 00:12:22.184 Slot 1 Firmware Revision: 24.09 00:12:22.184 00:12:22.184 00:12:22.184 Commands Supported and Effects 00:12:22.184 ============================== 00:12:22.184 Admin Commands 00:12:22.184 -------------- 00:12:22.184 Get Log Page (02h): Supported 00:12:22.184 Identify (06h): Supported 00:12:22.184 Abort (08h): Supported 00:12:22.184 Set Features (09h): Supported 00:12:22.184 Get Features (0Ah): Supported 00:12:22.184 Asynchronous Event Request (0Ch): Supported 00:12:22.184 Keep Alive (18h): Supported 00:12:22.184 I/O Commands 00:12:22.184 ------------ 00:12:22.184 Flush (00h): Supported LBA-Change 00:12:22.184 Write (01h): Supported LBA-Change 00:12:22.184 Read (02h): Supported 00:12:22.184 Compare (05h): Supported 00:12:22.184 Write Zeroes (08h): Supported LBA-Change 00:12:22.184 Dataset Management (09h): Supported LBA-Change 00:12:22.184 Copy (19h): Supported LBA-Change 00:12:22.184 00:12:22.184 Error Log 00:12:22.185 ========= 00:12:22.185 00:12:22.185 Arbitration 00:12:22.185 =========== 00:12:22.185 Arbitration Burst: 1 00:12:22.185 00:12:22.185 Power Management 00:12:22.185 ================ 00:12:22.185 Number of Power States: 1 00:12:22.185 Current Power State: Power State #0 00:12:22.185 Power State #0: 00:12:22.185 Max Power: 0.00 W 00:12:22.185 Non-Operational State: Operational 00:12:22.185 Entry Latency: Not Reported 00:12:22.185 Exit Latency: Not Reported 00:12:22.185 Relative Read Throughput: 0 00:12:22.185 Relative Read Latency: 0 00:12:22.185 Relative Write Throughput: 0 00:12:22.185 Relative Write Latency: 0 00:12:22.185 Idle Power: Not Reported 00:12:22.185 Active Power: Not Reported 00:12:22.185 Non-Operational Permissive Mode: Not Supported 00:12:22.185 00:12:22.185 Health Information 00:12:22.185 ================== 00:12:22.185 Critical Warnings: 00:12:22.185 Available Spare Space: OK 00:12:22.185 Temperature: OK 00:12:22.185 Device Reliability: OK 00:12:22.185 Read Only: No 00:12:22.185 Volatile Memory Backup: OK 00:12:22.185 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:22.185 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:22.185 Available Spare: 0% 00:12:22.185 
[2024-07-12 10:50:39.074262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:22.185 [2024-07-12 10:50:39.074269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:22.185 [2024-07-12 10:50:39.074291] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:22.185 [2024-07-12 10:50:39.074297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.185 [2024-07-12 10:50:39.074302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.185 [2024-07-12 10:50:39.074306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.185 [2024-07-12 10:50:39.074310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.185 [2024-07-12 10:50:39.078129] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:22.185 [2024-07-12 10:50:39.078137] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:22.185 [2024-07-12 10:50:39.078611] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:22.185 [2024-07-12 10:50:39.078650] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:22.185 [2024-07-12 10:50:39.078654] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:22.185 [2024-07-12 10:50:39.079618] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:22.185 [2024-07-12 10:50:39.079625] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:22.185 [2024-07-12 10:50:39.079681] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:22.185 [2024-07-12 10:50:39.080640] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:22.185 Available Spare Threshold: 0% 00:12:22.185 Life Percentage Used: 0% 00:12:22.185 Data Units Read: 0 00:12:22.185 Data Units Written: 0 00:12:22.185 Host Read Commands: 0 00:12:22.185 Host Write Commands: 0 00:12:22.185 Controller Busy Time: 0 minutes 00:12:22.185 Power Cycles: 0 00:12:22.185 Power On Hours: 0 hours 00:12:22.185 Unsafe Shutdowns: 0 00:12:22.185 Unrecoverable Media Errors: 0 00:12:22.185 Lifetime Error Log Entries: 0 00:12:22.185 Warning Temperature Time: 0 minutes 00:12:22.185 Critical Temperature Time: 0 minutes 00:12:22.185 00:12:22.185 Number of Queues 00:12:22.185 ================ 00:12:22.185 Number of I/O Submission Queues: 127 00:12:22.185 Number of I/O Completion Queues: 127 00:12:22.185 00:12:22.185 Active Namespaces 00:12:22.185 ================= 00:12:22.185 Namespace ID:1 00:12:22.185 Error Recovery Timeout: Unlimited 00:12:22.185 Command
Set Identifier: NVM (00h) 00:12:22.185 Deallocate: Supported 00:12:22.185 Deallocated/Unwritten Error: Not Supported 00:12:22.185 Deallocated Read Value: Unknown 00:12:22.185 Deallocate in Write Zeroes: Not Supported 00:12:22.185 Deallocated Guard Field: 0xFFFF 00:12:22.185 Flush: Supported 00:12:22.185 Reservation: Supported 00:12:22.185 Namespace Sharing Capabilities: Multiple Controllers 00:12:22.185 Size (in LBAs): 131072 (0GiB) 00:12:22.185 Capacity (in LBAs): 131072 (0GiB) 00:12:22.185 Utilization (in LBAs): 131072 (0GiB) 00:12:22.185 NGUID: 3A726C69B64745C69EBFD24F985C28DC 00:12:22.185 UUID: 3a726c69-b647-45c6-9ebf-d24f985c28dc 00:12:22.185 Thin Provisioning: Not Supported 00:12:22.185 Per-NS Atomic Units: Yes 00:12:22.185 Atomic Boundary Size (Normal): 0 00:12:22.185 Atomic Boundary Size (PFail): 0 00:12:22.185 Atomic Boundary Offset: 0 00:12:22.185 Maximum Single Source Range Length: 65535 00:12:22.185 Maximum Copy Length: 65535 00:12:22.185 Maximum Source Range Count: 1 00:12:22.185 NGUID/EUI64 Never Reused: No 00:12:22.185 Namespace Write Protected: No 00:12:22.185 Number of LBA Formats: 1 00:12:22.185 Current LBA Format: LBA Format #00 00:12:22.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:22.185 00:12:22.185 10:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:22.185 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.446 [2024-07-12 10:50:39.245731] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.733 Initializing NVMe Controllers 00:12:27.733 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:27.733 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:27.733 Initialization complete. Launching workers. 00:12:27.733 ======================================================== 00:12:27.733 Latency(us) 00:12:27.733 Device Information : IOPS MiB/s Average min max 00:12:27.733 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39963.80 156.11 3205.62 831.93 7790.88 00:12:27.733 ======================================================== 00:12:27.733 Total : 39963.80 156.11 3205.62 831.93 7790.88 00:12:27.733 00:12:27.733 [2024-07-12 10:50:44.266480] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.733 10:50:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:27.733 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.733 [2024-07-12 10:50:44.447330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:33.020 Initializing NVMe Controllers 00:12:33.020 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:33.020 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:33.020 Initialization complete. Launching workers. 
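The read pass above and the write pass whose numbers follow use one and the same spdk_nvme_perf command line, differing only in the -w workload. Flags are verbatim from this run, with $SPDK and $R added as shorthand: -q 128 queue depth, -o 4096 byte I/Os, -t 5 seconds, -c 0x2 pins the worker to core 1, -s 256 sizes hugepage memory in MB, and -g requests single-file memory segments (the same option that surfaces as --single-file-segments in the EAL parameter lines of the identify runs above).

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
R='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# Read pass, then the identical geometry for the write pass:
$SPDK/build/bin/spdk_nvme_perf -r "$R" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
$SPDK/build/bin/spdk_nvme_perf -r "$R" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2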
00:12:33.020 ======================================================== 00:12:33.020 Latency(us) 00:12:33.020 Device Information : IOPS MiB/s Average min max 00:12:33.020 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.71 5993.36 9970.76 00:12:33.020 ======================================================== 00:12:33.021 Total : 16051.20 62.70 7980.71 5993.36 9970.76 00:12:33.021 00:12:33.021 [2024-07-12 10:50:49.484016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:33.021 10:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:33.021 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.021 [2024-07-12 10:50:49.662787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:38.308 [2024-07-12 10:50:54.754407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.308 Initializing NVMe Controllers 00:12:38.308 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:38.308 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:38.308 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:38.308 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:38.308 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:38.308 Initialization complete. Launching workers. 00:12:38.308 Starting thread on core 2 00:12:38.308 Starting thread on core 3 00:12:38.308 Starting thread on core 1 00:12:38.308 10:50:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:38.308 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.308 [2024-07-12 10:50:54.983483] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.611 [2024-07-12 10:50:58.040256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.611 Initializing NVMe Controllers 00:12:41.611 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.611 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:41.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:41.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:41.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:41.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:41.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:41.611 Initialization complete. Launching workers. 
00:12:41.611 Starting thread on core 1 with urgent priority queue 00:12:41.611 Starting thread on core 2 with urgent priority queue 00:12:41.611 Starting thread on core 3 with urgent priority queue 00:12:41.611 Starting thread on core 0 with urgent priority queue 00:12:41.611 SPDK bdev Controller (SPDK1 ) core 0: 6324.00 IO/s 15.81 secs/100000 ios 00:12:41.611 SPDK bdev Controller (SPDK1 ) core 1: 5997.67 IO/s 16.67 secs/100000 ios 00:12:41.611 SPDK bdev Controller (SPDK1 ) core 2: 6496.33 IO/s 15.39 secs/100000 ios 00:12:41.611 SPDK bdev Controller (SPDK1 ) core 3: 6763.00 IO/s 14.79 secs/100000 ios 00:12:41.611 ======================================================== 00:12:41.611 00:12:41.611 10:50:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:41.611 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.611 [2024-07-12 10:50:58.261540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.611 Initializing NVMe Controllers 00:12:41.611 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.611 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.611 Namespace ID: 1 size: 0GB 00:12:41.611 Initialization complete. 00:12:41.611 INFO: using host memory buffer for IO 00:12:41.611 Hello world! 00:12:41.611 [2024-07-12 10:50:58.295751] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.611 10:50:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:41.611 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.611 [2024-07-12 10:50:58.518502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:42.554 Initializing NVMe Controllers 00:12:42.554 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:42.554 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:42.554 Initialization complete. Launching workers. 
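The reconnect, arbitration, and hello_world runs above, and the overhead run whose submit/complete histograms follow, all drive the same endpoint and take the identical -r transport ID string. The command lines are verbatim from this run, with $SPDK and $R as shorthand:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
R='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
$SPDK/build/examples/reconnect -r "$R" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
$SPDK/build/examples/arbitration -t 3 -r "$R" -d 256 -g
$SPDK/build/examples/hello_world -d 256 -g -r "$R"
$SPDK/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$R"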
00:12:42.554 submit (in ns) avg, min, max = 6624.2, 2810.0, 4022879.2 00:12:42.554 complete (in ns) avg, min, max = 15353.6, 1647.5, 4075658.3 00:12:42.554 00:12:42.554 Submit histogram 00:12:42.554 ================ 00:12:42.554 Range in us Cumulative Count 00:12:42.554 2.800 - 2.813: 0.0432% ( 9) 00:12:42.554 2.813 - 2.827: 0.1007% ( 12) 00:12:42.554 2.827 - 2.840: 0.7864% ( 143) 00:12:42.554 2.840 - 2.853: 1.9613% ( 245) 00:12:42.554 2.853 - 2.867: 4.4548% ( 520) 00:12:42.554 2.867 - 2.880: 8.8808% ( 923) 00:12:42.554 2.880 - 2.893: 14.2658% ( 1123) 00:12:42.554 2.893 - 2.907: 19.6893% ( 1131) 00:12:42.554 2.907 - 2.920: 25.9471% ( 1305) 00:12:42.554 2.920 - 2.933: 32.6988% ( 1408) 00:12:42.554 2.933 - 2.947: 38.0838% ( 1123) 00:12:42.554 2.947 - 2.960: 43.8621% ( 1205) 00:12:42.554 2.960 - 2.973: 49.3814% ( 1151) 00:12:42.554 2.973 - 2.987: 56.9819% ( 1585) 00:12:42.554 2.987 - 3.000: 64.4577% ( 1559) 00:12:42.554 3.000 - 3.013: 72.8254% ( 1745) 00:12:42.554 3.013 - 3.027: 80.0038% ( 1497) 00:12:42.554 3.027 - 3.040: 86.6836% ( 1393) 00:12:42.554 3.040 - 3.053: 91.5460% ( 1014) 00:12:42.554 3.053 - 3.067: 95.0417% ( 729) 00:12:42.554 3.067 - 3.080: 97.1085% ( 431) 00:12:42.554 3.080 - 3.093: 98.3744% ( 264) 00:12:42.554 3.093 - 3.107: 98.9163% ( 113) 00:12:42.554 3.107 - 3.120: 99.1992% ( 59) 00:12:42.554 3.120 - 3.133: 99.3910% ( 40) 00:12:42.554 3.133 - 3.147: 99.4725% ( 17) 00:12:42.554 3.147 - 3.160: 99.5061% ( 7) 00:12:42.554 3.160 - 3.173: 99.5301% ( 5) 00:12:42.554 3.173 - 3.187: 99.5397% ( 2) 00:12:42.554 3.200 - 3.213: 99.5445% ( 1) 00:12:42.554 3.240 - 3.253: 99.5492% ( 1) 00:12:42.554 3.253 - 3.267: 99.5540% ( 1) 00:12:42.554 3.280 - 3.293: 99.5588% ( 1) 00:12:42.554 3.293 - 3.307: 99.5636% ( 1) 00:12:42.554 3.360 - 3.373: 99.5684% ( 1) 00:12:42.554 3.373 - 3.387: 99.5732% ( 1) 00:12:42.554 3.400 - 3.413: 99.5780% ( 1) 00:12:42.554 3.413 - 3.440: 99.5876% ( 2) 00:12:42.554 3.467 - 3.493: 99.5924% ( 1) 00:12:42.554 3.493 - 3.520: 99.5972% ( 1) 00:12:42.554 3.547 - 3.573: 99.6020% ( 1) 00:12:42.554 3.707 - 3.733: 99.6068% ( 1) 00:12:42.554 3.813 - 3.840: 99.6116% ( 1) 00:12:42.554 3.867 - 3.893: 99.6164% ( 1) 00:12:42.554 3.947 - 3.973: 99.6212% ( 1) 00:12:42.554 3.973 - 4.000: 99.6260% ( 1) 00:12:42.554 4.107 - 4.133: 99.6308% ( 1) 00:12:42.554 4.373 - 4.400: 99.6356% ( 1) 00:12:42.554 4.427 - 4.453: 99.6404% ( 1) 00:12:42.554 4.480 - 4.507: 99.6452% ( 1) 00:12:42.554 4.507 - 4.533: 99.6499% ( 1) 00:12:42.554 4.533 - 4.560: 99.6547% ( 1) 00:12:42.554 4.613 - 4.640: 99.6595% ( 1) 00:12:42.554 4.667 - 4.693: 99.6691% ( 2) 00:12:42.554 4.773 - 4.800: 99.6739% ( 1) 00:12:42.554 4.880 - 4.907: 99.6835% ( 2) 00:12:42.554 4.960 - 4.987: 99.6931% ( 2) 00:12:42.554 4.987 - 5.013: 99.7027% ( 2) 00:12:42.554 5.067 - 5.093: 99.7123% ( 2) 00:12:42.554 5.147 - 5.173: 99.7219% ( 2) 00:12:42.554 5.173 - 5.200: 99.7267% ( 1) 00:12:42.554 5.760 - 5.787: 99.7315% ( 1) 00:12:42.554 5.787 - 5.813: 99.7363% ( 1) 00:12:42.554 5.840 - 5.867: 99.7411% ( 1) 00:12:42.554 5.893 - 5.920: 99.7459% ( 1) 00:12:42.554 5.973 - 6.000: 99.7506% ( 1) 00:12:42.554 6.027 - 6.053: 99.7554% ( 1) 00:12:42.554 6.133 - 6.160: 99.7602% ( 1) 00:12:42.554 6.213 - 6.240: 99.7698% ( 2) 00:12:42.554 6.320 - 6.347: 99.7746% ( 1) 00:12:42.554 6.373 - 6.400: 99.7794% ( 1) 00:12:42.554 6.427 - 6.453: 99.7842% ( 1) 00:12:42.554 6.480 - 6.507: 99.7890% ( 1) 00:12:42.554 6.587 - 6.613: 99.7938% ( 1) 00:12:42.554 6.667 - 6.693: 99.8034% ( 2) 00:12:42.554 6.720 - 6.747: 99.8082% ( 1) 00:12:42.554 6.773 - 6.800: 99.8130% ( 1) 
00:12:42.554 6.800 - 6.827: 99.8178% ( 1) 00:12:42.554 6.880 - 6.933: 99.8370% ( 4) 00:12:42.554 6.987 - 7.040: 99.8418% ( 1) 00:12:42.554 7.040 - 7.093: 99.8513% ( 2) 00:12:42.554 7.147 - 7.200: 99.8561% ( 1) 00:12:42.554 7.200 - 7.253: 99.8609% ( 1) 00:12:42.554 7.360 - 7.413: 99.8657% ( 1) 00:12:42.554 7.413 - 7.467: 99.8705% ( 1) 00:12:42.554 7.467 - 7.520: 99.8753% ( 1) 00:12:42.554 7.520 - 7.573: 99.8849% ( 2) 00:12:42.554 7.573 - 7.627: 99.8897% ( 1) 00:12:42.554 8.000 - 8.053: 99.8945% ( 1) 00:12:42.554 9.867 - 9.920: 99.8993% ( 1) 00:12:42.554 13.013 - 13.067: 99.9041% ( 1) 00:12:42.554 118.613 - 119.467: 99.9089% ( 1) 00:12:42.554 3986.773 - 4014.080: 99.9952% ( 18) 00:12:42.554 4014.080 - 4041.387: 100.0000% ( 1) 00:12:42.554 00:12:42.554 Complete histogram 00:12:42.554 ================== 00:12:42.554 Range in us Cumulative Count 00:12:42.554 1.647 - 1.653: 0.0096% ( 2) 00:12:42.554 1.653 - 1.660: 0.0144% ( 1) 00:12:42.554 1.660 - 1.667: 0.3932% ( 79) 00:12:42.554 1.667 - 1.673: 1.0550% ( 138) 00:12:42.554 1.673 - 1.680: 1.1413% ( 18) 00:12:42.554 1.680 - 1.687: 1.3666% ( 47) 00:12:42.554 1.687 - 1.693: 1.4146% ( 10) 00:12:42.554 1.693 - 1.700: 1.4386% ( 5) 00:12:42.554 1.700 - 1.707: 1.4578% ( 4) 00:12:42.554 1.707 - 1.720: 19.8044% ( 3826) 00:12:42.554 1.720 - 1.733: 50.4891% ( 6399) 00:12:42.554 1.733 - 1.747: 72.9692% ( 4688) 00:12:42.554 1.747 - 1.760: 82.9002% ( 2071) 00:12:42.554 1.760 - 1.773: 84.7943% ( 395) 00:12:42.554 1.773 - 1.787: 87.7769% ( 622) 00:12:42.554 1.787 - 1.800: 92.5146% ( 988) 00:12:42.554 1.800 - 1.813: 96.3077% ( 791) 00:12:42.554 1.813 - 1.827: 98.4607% ( 449) 00:12:42.554 1.827 - 1.840: 99.2663% ( 168) 00:12:42.554 1.840 - 1.853: 99.3718% ( 22) 00:12:42.554 1.853 - 1.867: 99.3910% ( 4) 00:12:42.554 1.867 - 1.880: 99.3958% ( 1) 00:12:42.554 1.880 - 1.893: 99.4006% ( 1) 00:12:42.554 1.893 - 1.907: 99.4102% ( 2) 00:12:42.554 1.920 - 1.933: 99.4150% ( 1) 00:12:42.554 1.933 - 1.947: 99.4198% ( 1) 00:12:42.554 1.947 - 1.960: 99.4294% ( 2) 00:12:42.554 1.987 - 2.000: 99.4342% ( 1) 00:12:42.554 2.120 - 2.133: 99.4390% ( 1) 00:12:42.554 2.227 - 2.240: 99.4438% ( 1) 00:12:42.554 2.293 - 2.307: 99.4485% ( 1) 00:12:42.554 3.333 - 3.347: 99.4581% ( 2) 00:12:42.554 3.387 - 3.400: 99.4629% ( 1) 00:12:42.554 3.413 - 3.440: 99.4677% ( 1) 00:12:42.554 3.493 - 3.520: 99.4725% ( 1) 00:12:42.554 3.547 - 3.573: 99.4773% ( 1) 00:12:42.554 3.627 - 3.653: 99.4821% ( 1) 00:12:42.554 3.653 - 3.680: 99.4869% ( 1) 00:12:42.554 3.787 - 3.813: 99.4917% ( 1) 00:12:42.554 3.840 - 3.867: 99.4965% ( 1) 00:12:42.554 4.373 - 4.400: 99.5013% ( 1) 00:12:42.555 4.427 - 4.453: 99.5061% ( 1) 00:12:42.555 4.453 - 4.480: 99.5109% ( 1) 00:12:42.555 4.480 - 4.507: 99.5157% ( 1) 00:12:42.555 4.640 - 4.667: 99.5205% ( 1) 00:12:42.555 4.693 - 4.720: 99.5253% ( 1) 00:12:42.555 4.773 - 4.800: 99.5301% ( 1) 00:12:42.555 4.800 - 4.827: 99.5349% ( 1) 00:12:42.555 5.013 - 5.040: 99.5397% ( 1) 00:12:42.555 5.147 - 5.173: 99.5445% ( 1) 00:12:42.555 5.173 - 5.200: 99.5492% ( 1) 00:12:42.555 5.253 - 5.280: 99.5588% ( 2) 00:12:42.555 5.280 - 5.307: 99.5636% ( 1) 00:12:42.555 5.467 - 5.493: 99.5732% ( 2) 00:12:42.555 5.600 - 5.627: 99.5780% ( 1) 00:12:42.555 5.707 - 5.733: 99.5828% ( 1) 00:12:42.555 5.733 - 5.760: 99.5924% ( 2) 00:12:42.555 5.760 - 5.787: 99.5972% ( 1) 00:12:42.555 5.813 - 5.840: 99.6068% ( 2) 00:12:42.555 5.920 - 5.947: 99.6116% ( 1) 00:12:42.555 5.947 - 5.973: 99.6164% ( 1) 00:12:42.555 6.000 - 6.027: 99.6212% ( 1) 00:12:42.555 6.160 - 6.187: 99.6260% ( 1) 00:12:42.555 6.347 - 6.373: 
99.6308% ( 1) 00:12:42.555 6.427 - 6.453: 99.6356% ( 1) 00:12:42.555 6.507 - 6.533: 99.6404% ( 1) 00:12:42.555 7.200 - 7.253: 99.6452% ( 1) 00:12:42.555 7.573 - 7.627: 99.6499% ( 1) 00:12:42.555 12.960 - 13.013: 99.6547% ( 1) 00:12:42.555 34.347 - 34.560: 99.6595% ( 1) 00:12:42.555 [2024-07-12 10:50:59.537265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:42.815 3986.773 - 4014.080: 99.9952% ( 70) 00:12:42.815 4068.693 - 4096.000: 100.0000% ( 1) 00:12:42.815 00:12:42.815 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:42.815 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:42.815 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:42.815 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:42.815 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:42.816 [ 00:12:42.816 { 00:12:42.816 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:42.816 "subtype": "Discovery", 00:12:42.816 "listen_addresses": [], 00:12:42.816 "allow_any_host": true, 00:12:42.816 "hosts": [] 00:12:42.816 }, 00:12:42.816 { 00:12:42.816 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:42.816 "subtype": "NVMe", 00:12:42.816 "listen_addresses": [ 00:12:42.816 { 00:12:42.816 "trtype": "VFIOUSER", 00:12:42.816 "adrfam": "IPv4", 00:12:42.816 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:42.816 "trsvcid": "0" 00:12:42.816 } 00:12:42.816 ], 00:12:42.816 "allow_any_host": true, 00:12:42.816 "hosts": [], 00:12:42.816 "serial_number": "SPDK1", 00:12:42.816 "model_number": "SPDK bdev Controller", 00:12:42.816 "max_namespaces": 32, 00:12:42.816 "min_cntlid": 1, 00:12:42.816 "max_cntlid": 65519, 00:12:42.816 "namespaces": [ 00:12:42.816 { 00:12:42.816 "nsid": 1, 00:12:42.816 "bdev_name": "Malloc1", 00:12:42.816 "name": "Malloc1", 00:12:42.816 "nguid": "3A726C69B64745C69EBFD24F985C28DC", 00:12:42.816 "uuid": "3a726c69-b647-45c6-9ebf-d24f985c28dc" 00:12:42.816 } 00:12:42.816 ] 00:12:42.816 }, 00:12:42.816 { 00:12:42.816 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:42.816 "subtype": "NVMe", 00:12:42.816 "listen_addresses": [ 00:12:42.816 { 00:12:42.816 "trtype": "VFIOUSER", 00:12:42.816 "adrfam": "IPv4", 00:12:42.816 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:42.816 "trsvcid": "0" 00:12:42.816 } 00:12:42.816 ], 00:12:42.816 "allow_any_host": true, 00:12:42.816 "hosts": [], 00:12:42.816 "serial_number": "SPDK2", 00:12:42.816 "model_number": "SPDK bdev Controller", 00:12:42.816 "max_namespaces": 32, 00:12:42.816 "min_cntlid": 1, 00:12:42.816 "max_cntlid": 65519, 00:12:42.816 "namespaces": [ 00:12:42.816 { 00:12:42.816 "nsid": 1, 00:12:42.816 "bdev_name": "Malloc2", 00:12:42.816 "name": "Malloc2", 00:12:42.816 "nguid": "78137F6CE0B64C549DBE31D9D16F6A84", 00:12:42.816 "uuid": "78137f6c-e0b6-4c54-9dbe-31d9d16f6a84" 00:12:42.816 } 00:12:42.816 ] 00:12:42.816 } 00:12:42.816 ] 00:12:42.816 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:42.816 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER
traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:42.816 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2006078 00:12:42.816 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:42.816 10:50:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:42.816 10:50:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:42.816 10:50:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:42.816 10:50:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:42.816 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:42.816 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:42.816 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.080 [2024-07-12 10:50:59.881526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:43.080 Malloc3 00:12:43.080 10:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:43.425 [2024-07-12 10:51:00.075913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:43.425 10:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:43.425 Asynchronous Event Request test 00:12:43.425 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:43.425 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:43.425 Registering asynchronous event callbacks... 00:12:43.425 Starting namespace attribute notice tests for all controllers... 00:12:43.425 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:43.425 aer_cb - Changed Namespace 00:12:43.425 Cleaning up... 
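The aer_vfio_user step above amounts to a namespace hot-add: arm the aer tool against cnode1, add a second namespace over RPC, and confirm the tool reports the namespace-attribute notice (the "aer_cb - Changed Namespace" line) before the subsystem listing that follows. A condensed sketch, with the touch-file handshake simplified from the script's waitforfile helper and $SPDK/$R as before; the RPC commands are verbatim from this log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
R='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
$SPDK/test/nvme/aer/aer -r "$R" -n 2 -g -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done   # tool signals its AER is armed
rm -f /tmp/aer_touch_file
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3   # hot-add a second namespace
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
$SPDK/scripts/rpc.py nvmf_get_subsystems    # Malloc3 now listed under cnode1 as nsid 2
wait $aerpid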
00:12:43.425 [ 00:12:43.425 { 00:12:43.425 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:43.425 "subtype": "Discovery", 00:12:43.425 "listen_addresses": [], 00:12:43.425 "allow_any_host": true, 00:12:43.425 "hosts": [] 00:12:43.425 }, 00:12:43.425 { 00:12:43.425 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:43.425 "subtype": "NVMe", 00:12:43.425 "listen_addresses": [ 00:12:43.425 { 00:12:43.425 "trtype": "VFIOUSER", 00:12:43.425 "adrfam": "IPv4", 00:12:43.425 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:43.425 "trsvcid": "0" 00:12:43.425 } 00:12:43.425 ], 00:12:43.425 "allow_any_host": true, 00:12:43.425 "hosts": [], 00:12:43.425 "serial_number": "SPDK1", 00:12:43.425 "model_number": "SPDK bdev Controller", 00:12:43.425 "max_namespaces": 32, 00:12:43.425 "min_cntlid": 1, 00:12:43.425 "max_cntlid": 65519, 00:12:43.425 "namespaces": [ 00:12:43.425 { 00:12:43.425 "nsid": 1, 00:12:43.425 "bdev_name": "Malloc1", 00:12:43.425 "name": "Malloc1", 00:12:43.425 "nguid": "3A726C69B64745C69EBFD24F985C28DC", 00:12:43.425 "uuid": "3a726c69-b647-45c6-9ebf-d24f985c28dc" 00:12:43.425 }, 00:12:43.425 { 00:12:43.425 "nsid": 2, 00:12:43.425 "bdev_name": "Malloc3", 00:12:43.425 "name": "Malloc3", 00:12:43.425 "nguid": "49DDB6D893FF496496F89AA8380FD819", 00:12:43.425 "uuid": "49ddb6d8-93ff-4964-96f8-9aa8380fd819" 00:12:43.425 } 00:12:43.425 ] 00:12:43.425 }, 00:12:43.425 { 00:12:43.425 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:43.425 "subtype": "NVMe", 00:12:43.425 "listen_addresses": [ 00:12:43.425 { 00:12:43.425 "trtype": "VFIOUSER", 00:12:43.425 "adrfam": "IPv4", 00:12:43.425 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:43.425 "trsvcid": "0" 00:12:43.425 } 00:12:43.425 ], 00:12:43.425 "allow_any_host": true, 00:12:43.425 "hosts": [], 00:12:43.425 "serial_number": "SPDK2", 00:12:43.425 "model_number": "SPDK bdev Controller", 00:12:43.425 "max_namespaces": 32, 00:12:43.425 "min_cntlid": 1, 00:12:43.425 "max_cntlid": 65519, 00:12:43.425 "namespaces": [ 00:12:43.425 { 00:12:43.425 "nsid": 1, 00:12:43.425 "bdev_name": "Malloc2", 00:12:43.425 "name": "Malloc2", 00:12:43.425 "nguid": "78137F6CE0B64C549DBE31D9D16F6A84", 00:12:43.425 "uuid": "78137f6c-e0b6-4c54-9dbe-31d9d16f6a84" 00:12:43.425 } 00:12:43.425 ] 00:12:43.425 } 00:12:43.425 ] 00:12:43.425 10:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2006078 00:12:43.425 10:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:43.425 10:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:43.425 10:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:43.425 10:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:43.426 [2024-07-12 10:51:00.283998] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:43.426 [2024-07-12 10:51:00.284034] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006113 ] 00:12:43.426 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.426 [2024-07-12 10:51:00.311222] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:43.426 [2024-07-12 10:51:00.321363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:43.426 [2024-07-12 10:51:00.321380] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f72fa960000 00:12:43.426 [2024-07-12 10:51:00.322362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.426 [2024-07-12 10:51:00.323365] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.426 [2024-07-12 10:51:00.324375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.426 [2024-07-12 10:51:00.325382] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:43.426 [2024-07-12 10:51:00.326388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:43.426 [2024-07-12 10:51:00.327399] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.426 [2024-07-12 10:51:00.328409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:43.426 [2024-07-12 10:51:00.329422] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.426 [2024-07-12 10:51:00.330426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:43.426 [2024-07-12 10:51:00.330433] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f72fa955000 00:12:43.426 [2024-07-12 10:51:00.331346] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:43.426 [2024-07-12 10:51:00.343740] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:43.426 [2024-07-12 10:51:00.343761] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:43.426 [2024-07-12 10:51:00.345809] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:43.426 [2024-07-12 10:51:00.345841] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:43.426 [2024-07-12 10:51:00.345903] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:12:43.426 [2024-07-12 10:51:00.345915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:43.426 [2024-07-12 10:51:00.345920] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:43.426 [2024-07-12 10:51:00.346810] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:43.426 [2024-07-12 10:51:00.346817] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:43.426 [2024-07-12 10:51:00.346822] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:43.426 [2024-07-12 10:51:00.347811] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:43.426 [2024-07-12 10:51:00.347821] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:43.426 [2024-07-12 10:51:00.347826] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:43.426 [2024-07-12 10:51:00.348816] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:43.426 [2024-07-12 10:51:00.348823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:43.426 [2024-07-12 10:51:00.349824] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:43.426 [2024-07-12 10:51:00.349831] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:43.426 [2024-07-12 10:51:00.349835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:43.426 [2024-07-12 10:51:00.349839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:43.426 [2024-07-12 10:51:00.349944] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:43.426 [2024-07-12 10:51:00.349947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:43.426 [2024-07-12 10:51:00.349951] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:43.426 [2024-07-12 10:51:00.350833] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:43.426 [2024-07-12 10:51:00.351842] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:43.426 [2024-07-12 10:51:00.352845] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:43.426 [2024-07-12 10:51:00.353845] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.426 [2024-07-12 10:51:00.353882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:43.426 [2024-07-12 10:51:00.354855] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:43.426 [2024-07-12 10:51:00.354861] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:43.426 [2024-07-12 10:51:00.354865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:43.426 [2024-07-12 10:51:00.354879] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:43.426 [2024-07-12 10:51:00.354885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:43.426 [2024-07-12 10:51:00.354896] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:43.426 [2024-07-12 10:51:00.354899] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.426 [2024-07-12 10:51:00.354910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.426 [2024-07-12 10:51:00.365131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:43.426 [2024-07-12 10:51:00.365140] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:43.426 [2024-07-12 10:51:00.365146] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:43.426 [2024-07-12 10:51:00.365150] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:43.426 [2024-07-12 10:51:00.365153] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:43.426 [2024-07-12 10:51:00.365157] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:43.426 [2024-07-12 10:51:00.365160] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:43.426 [2024-07-12 10:51:00.365163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:43.426 [2024-07-12 10:51:00.365169] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:43.426 [2024-07-12 10:51:00.365176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:12:43.426 [2024-07-12 10:51:00.373129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:43.426 [2024-07-12 10:51:00.373142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.426 [2024-07-12 10:51:00.373148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.426 [2024-07-12 10:51:00.373155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.426 [2024-07-12 10:51:00.373161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.426 [2024-07-12 10:51:00.373164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:43.426 [2024-07-12 10:51:00.373170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:43.426 [2024-07-12 10:51:00.373177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:43.426 [2024-07-12 10:51:00.381128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:43.426 [2024-07-12 10:51:00.381134] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:43.426 [2024-07-12 10:51:00.381138] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:43.426 [2024-07-12 10:51:00.381143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:43.426 [2024-07-12 10:51:00.381148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:43.426 [2024-07-12 10:51:00.381155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:43.689 [2024-07-12 10:51:00.389131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.389180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.389186] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.389192] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:43.689 [2024-07-12 10:51:00.389195] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:43.689 [2024-07-12 10:51:00.389200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:12:43.689 [2024-07-12 10:51:00.397127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.397137] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:43.689 [2024-07-12 10:51:00.397147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.397153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.397158] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:43.689 [2024-07-12 10:51:00.397161] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.689 [2024-07-12 10:51:00.397165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.689 [2024-07-12 10:51:00.405129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.405141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.405146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.405151] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:43.689 [2024-07-12 10:51:00.405154] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.689 [2024-07-12 10:51:00.405158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.689 [2024-07-12 10:51:00.413129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.413136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.413141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.413147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.413151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.413155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.413158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:43.689 
[2024-07-12 10:51:00.413162] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:43.689 [2024-07-12 10:51:00.413167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:43.689 [2024-07-12 10:51:00.413170] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:43.689 [2024-07-12 10:51:00.413184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:43.689 [2024-07-12 10:51:00.421128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.421139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:43.689 [2024-07-12 10:51:00.429131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.429141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:43.689 [2024-07-12 10:51:00.437130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.437139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:43.689 [2024-07-12 10:51:00.445128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.445141] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:43.689 [2024-07-12 10:51:00.445144] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:43.689 [2024-07-12 10:51:00.445146] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:43.689 [2024-07-12 10:51:00.445149] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:43.689 [2024-07-12 10:51:00.445153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:43.689 [2024-07-12 10:51:00.445158] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:43.689 [2024-07-12 10:51:00.445161] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:43.689 [2024-07-12 10:51:00.445165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:43.689 [2024-07-12 10:51:00.445170] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:43.689 [2024-07-12 10:51:00.445173] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.689 [2024-07-12 10:51:00.445177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:12:43.689 [2024-07-12 10:51:00.445183] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:43.689 [2024-07-12 10:51:00.445186] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:43.689 [2024-07-12 10:51:00.445190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:43.689 [2024-07-12 10:51:00.453130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.453141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.453149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:43.689 [2024-07-12 10:51:00.453155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:43.689 ===================================================== 00:12:43.689 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:43.689 ===================================================== 00:12:43.689 Controller Capabilities/Features 00:12:43.689 ================================ 00:12:43.689 Vendor ID: 4e58 00:12:43.689 Subsystem Vendor ID: 4e58 00:12:43.689 Serial Number: SPDK2 00:12:43.689 Model Number: SPDK bdev Controller 00:12:43.689 Firmware Version: 24.09 00:12:43.689 Recommended Arb Burst: 6 00:12:43.689 IEEE OUI Identifier: 8d 6b 50 00:12:43.689 Multi-path I/O 00:12:43.689 May have multiple subsystem ports: Yes 00:12:43.689 May have multiple controllers: Yes 00:12:43.689 Associated with SR-IOV VF: No 00:12:43.689 Max Data Transfer Size: 131072 00:12:43.689 Max Number of Namespaces: 32 00:12:43.689 Max Number of I/O Queues: 127 00:12:43.689 NVMe Specification Version (VS): 1.3 00:12:43.689 NVMe Specification Version (Identify): 1.3 00:12:43.689 Maximum Queue Entries: 256 00:12:43.689 Contiguous Queues Required: Yes 00:12:43.689 Arbitration Mechanisms Supported 00:12:43.689 Weighted Round Robin: Not Supported 00:12:43.689 Vendor Specific: Not Supported 00:12:43.689 Reset Timeout: 15000 ms 00:12:43.689 Doorbell Stride: 4 bytes 00:12:43.689 NVM Subsystem Reset: Not Supported 00:12:43.689 Command Sets Supported 00:12:43.689 NVM Command Set: Supported 00:12:43.689 Boot Partition: Not Supported 00:12:43.690 Memory Page Size Minimum: 4096 bytes 00:12:43.690 Memory Page Size Maximum: 4096 bytes 00:12:43.690 Persistent Memory Region: Not Supported 00:12:43.690 Optional Asynchronous Events Supported 00:12:43.690 Namespace Attribute Notices: Supported 00:12:43.690 Firmware Activation Notices: Not Supported 00:12:43.690 ANA Change Notices: Not Supported 00:12:43.690 PLE Aggregate Log Change Notices: Not Supported 00:12:43.690 LBA Status Info Alert Notices: Not Supported 00:12:43.690 EGE Aggregate Log Change Notices: Not Supported 00:12:43.690 Normal NVM Subsystem Shutdown event: Not Supported 00:12:43.690 Zone Descriptor Change Notices: Not Supported 00:12:43.690 Discovery Log Change Notices: Not Supported 00:12:43.690 Controller Attributes 00:12:43.690 128-bit Host Identifier: Supported 00:12:43.690 Non-Operational Permissive Mode: Not Supported 00:12:43.690 NVM Sets: Not Supported 00:12:43.690 Read Recovery Levels: Not Supported 
00:12:43.690 Endurance Groups: Not Supported 00:12:43.690 Predictable Latency Mode: Not Supported 00:12:43.690 Traffic Based Keep ALive: Not Supported 00:12:43.690 Namespace Granularity: Not Supported 00:12:43.690 SQ Associations: Not Supported 00:12:43.690 UUID List: Not Supported 00:12:43.690 Multi-Domain Subsystem: Not Supported 00:12:43.690 Fixed Capacity Management: Not Supported 00:12:43.690 Variable Capacity Management: Not Supported 00:12:43.690 Delete Endurance Group: Not Supported 00:12:43.690 Delete NVM Set: Not Supported 00:12:43.690 Extended LBA Formats Supported: Not Supported 00:12:43.690 Flexible Data Placement Supported: Not Supported 00:12:43.690 00:12:43.690 Controller Memory Buffer Support 00:12:43.690 ================================ 00:12:43.690 Supported: No 00:12:43.690 00:12:43.690 Persistent Memory Region Support 00:12:43.690 ================================ 00:12:43.690 Supported: No 00:12:43.690 00:12:43.690 Admin Command Set Attributes 00:12:43.690 ============================ 00:12:43.690 Security Send/Receive: Not Supported 00:12:43.690 Format NVM: Not Supported 00:12:43.690 Firmware Activate/Download: Not Supported 00:12:43.690 Namespace Management: Not Supported 00:12:43.690 Device Self-Test: Not Supported 00:12:43.690 Directives: Not Supported 00:12:43.690 NVMe-MI: Not Supported 00:12:43.690 Virtualization Management: Not Supported 00:12:43.690 Doorbell Buffer Config: Not Supported 00:12:43.690 Get LBA Status Capability: Not Supported 00:12:43.690 Command & Feature Lockdown Capability: Not Supported 00:12:43.690 Abort Command Limit: 4 00:12:43.690 Async Event Request Limit: 4 00:12:43.690 Number of Firmware Slots: N/A 00:12:43.690 Firmware Slot 1 Read-Only: N/A 00:12:43.690 Firmware Activation Without Reset: N/A 00:12:43.690 Multiple Update Detection Support: N/A 00:12:43.690 Firmware Update Granularity: No Information Provided 00:12:43.690 Per-Namespace SMART Log: No 00:12:43.690 Asymmetric Namespace Access Log Page: Not Supported 00:12:43.690 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:43.690 Command Effects Log Page: Supported 00:12:43.690 Get Log Page Extended Data: Supported 00:12:43.690 Telemetry Log Pages: Not Supported 00:12:43.690 Persistent Event Log Pages: Not Supported 00:12:43.690 Supported Log Pages Log Page: May Support 00:12:43.690 Commands Supported & Effects Log Page: Not Supported 00:12:43.690 Feature Identifiers & Effects Log Page:May Support 00:12:43.690 NVMe-MI Commands & Effects Log Page: May Support 00:12:43.690 Data Area 4 for Telemetry Log: Not Supported 00:12:43.690 Error Log Page Entries Supported: 128 00:12:43.690 Keep Alive: Supported 00:12:43.690 Keep Alive Granularity: 10000 ms 00:12:43.690 00:12:43.690 NVM Command Set Attributes 00:12:43.690 ========================== 00:12:43.690 Submission Queue Entry Size 00:12:43.690 Max: 64 00:12:43.690 Min: 64 00:12:43.690 Completion Queue Entry Size 00:12:43.690 Max: 16 00:12:43.690 Min: 16 00:12:43.690 Number of Namespaces: 32 00:12:43.690 Compare Command: Supported 00:12:43.690 Write Uncorrectable Command: Not Supported 00:12:43.690 Dataset Management Command: Supported 00:12:43.690 Write Zeroes Command: Supported 00:12:43.690 Set Features Save Field: Not Supported 00:12:43.690 Reservations: Not Supported 00:12:43.690 Timestamp: Not Supported 00:12:43.690 Copy: Supported 00:12:43.690 Volatile Write Cache: Present 00:12:43.690 Atomic Write Unit (Normal): 1 00:12:43.690 Atomic Write Unit (PFail): 1 00:12:43.690 Atomic Compare & Write Unit: 1 00:12:43.690 Fused Compare & Write: 
Supported 00:12:43.690 Scatter-Gather List 00:12:43.690 SGL Command Set: Supported (Dword aligned) 00:12:43.690 SGL Keyed: Not Supported 00:12:43.690 SGL Bit Bucket Descriptor: Not Supported 00:12:43.690 SGL Metadata Pointer: Not Supported 00:12:43.690 Oversized SGL: Not Supported 00:12:43.690 SGL Metadata Address: Not Supported 00:12:43.690 SGL Offset: Not Supported 00:12:43.690 Transport SGL Data Block: Not Supported 00:12:43.690 Replay Protected Memory Block: Not Supported 00:12:43.690 00:12:43.690 Firmware Slot Information 00:12:43.690 ========================= 00:12:43.690 Active slot: 1 00:12:43.690 Slot 1 Firmware Revision: 24.09 00:12:43.690 00:12:43.690 00:12:43.690 Commands Supported and Effects 00:12:43.690 ============================== 00:12:43.690 Admin Commands 00:12:43.690 -------------- 00:12:43.690 Get Log Page (02h): Supported 00:12:43.690 Identify (06h): Supported 00:12:43.690 Abort (08h): Supported 00:12:43.690 Set Features (09h): Supported 00:12:43.690 Get Features (0Ah): Supported 00:12:43.690 Asynchronous Event Request (0Ch): Supported 00:12:43.690 Keep Alive (18h): Supported 00:12:43.690 I/O Commands 00:12:43.690 ------------ 00:12:43.690 Flush (00h): Supported LBA-Change 00:12:43.690 Write (01h): Supported LBA-Change 00:12:43.690 Read (02h): Supported 00:12:43.690 Compare (05h): Supported 00:12:43.690 Write Zeroes (08h): Supported LBA-Change 00:12:43.690 Dataset Management (09h): Supported LBA-Change 00:12:43.690 Copy (19h): Supported LBA-Change 00:12:43.690 00:12:43.690 Error Log 00:12:43.690 ========= 00:12:43.690 00:12:43.690 Arbitration 00:12:43.690 =========== 00:12:43.690 Arbitration Burst: 1 00:12:43.690 00:12:43.690 Power Management 00:12:43.690 ================ 00:12:43.690 Number of Power States: 1 00:12:43.690 Current Power State: Power State #0 00:12:43.690 Power State #0: 00:12:43.690 Max Power: 0.00 W 00:12:43.690 Non-Operational State: Operational 00:12:43.690 Entry Latency: Not Reported 00:12:43.690 Exit Latency: Not Reported 00:12:43.690 Relative Read Throughput: 0 00:12:43.690 Relative Read Latency: 0 00:12:43.690 Relative Write Throughput: 0 00:12:43.690 Relative Write Latency: 0 00:12:43.690 Idle Power: Not Reported 00:12:43.690 Active Power: Not Reported 00:12:43.690 Non-Operational Permissive Mode: Not Supported 00:12:43.690 00:12:43.690 Health Information 00:12:43.690 ================== 00:12:43.690 Critical Warnings: 00:12:43.690 Available Spare Space: OK 00:12:43.690 Temperature: OK 00:12:43.690 Device Reliability: OK 00:12:43.690 Read Only: No 00:12:43.690 Volatile Memory Backup: OK 00:12:43.690 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:43.690 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:43.690 Available Spare: 0% 00:12:43.690 [2024-07-12 10:51:00.453225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:43.690 [2024-07-12 10:51:00.461130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:43.690 [2024-07-12 10:51:00.461155] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:43.690 [2024-07-12 10:51:00.461162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.690 [2024-07-12 10:51:00.461167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.690 [2024-07-12 10:51:00.461171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.690 [2024-07-12 10:51:00.461175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.690 [2024-07-12 10:51:00.461219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:43.690 [2024-07-12 10:51:00.461228] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:43.690 [2024-07-12 10:51:00.462225] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.690 [2024-07-12 10:51:00.462262] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:43.690 [2024-07-12 10:51:00.462267] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:43.690 [2024-07-12 10:51:00.463226] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:43.690 [2024-07-12 10:51:00.463236] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:43.690 [2024-07-12 10:51:00.463285] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:43.690 [2024-07-12 10:51:00.464243] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:43.690 Available Spare Threshold: 0% 00:12:43.690 Life Percentage Used: 0% 00:12:43.690 Data Units Read: 0 00:12:43.690 Data Units Written: 0 00:12:43.690 Host Read Commands: 0 00:12:43.690 Host Write Commands: 0 00:12:43.690 Controller Busy Time: 0 minutes 00:12:43.691 Power Cycles: 0 00:12:43.691 Power On Hours: 0 hours 00:12:43.691 Unsafe Shutdowns: 0 00:12:43.691 Unrecoverable Media Errors: 0 00:12:43.691 Lifetime Error Log Entries: 0 00:12:43.691 Warning Temperature Time: 0 minutes 00:12:43.691 Critical Temperature Time: 0 minutes 00:12:43.691 00:12:43.691 Number of Queues 00:12:43.691 ================ 00:12:43.691 Number of I/O Submission Queues: 127 00:12:43.691 Number of I/O Completion Queues: 127 00:12:43.691 00:12:43.691 Active Namespaces 00:12:43.691 ================= 00:12:43.691 Namespace ID:1 00:12:43.691 Error Recovery Timeout: Unlimited 00:12:43.691 Command Set Identifier: NVM (00h) 00:12:43.691 Deallocate: Supported 00:12:43.691 Deallocated/Unwritten Error: Not Supported 00:12:43.691 Deallocated Read Value: Unknown 00:12:43.691 Deallocate in Write Zeroes: Not Supported 00:12:43.691 Deallocated Guard Field: 0xFFFF 00:12:43.691 Flush: Supported 00:12:43.691 Reservation: Supported 00:12:43.691 Namespace Sharing Capabilities: Multiple Controllers 00:12:43.691 Size (in LBAs): 131072 (0GiB) 00:12:43.691 Capacity (in LBAs): 131072 (0GiB) 00:12:43.691 Utilization (in LBAs): 131072 (0GiB) 00:12:43.691 NGUID: 78137F6CE0B64C549DBE31D9D16F6A84 00:12:43.691 UUID: 78137f6c-e0b6-4c54-9dbe-31d9d16f6a84 00:12:43.691 Thin Provisioning: Not Supported 00:12:43.691 Per-NS Atomic Units: Yes 00:12:43.691 Atomic Boundary Size (Normal): 0 00:12:43.691 Atomic Boundary Size
(PFail): 0 00:12:43.691 Atomic Boundary Offset: 0 00:12:43.691 Maximum Single Source Range Length: 65535 00:12:43.691 Maximum Copy Length: 65535 00:12:43.691 Maximum Source Range Count: 1 00:12:43.691 NGUID/EUI64 Never Reused: No 00:12:43.691 Namespace Write Protected: No 00:12:43.691 Number of LBA Formats: 1 00:12:43.691 Current LBA Format: LBA Format #00 00:12:43.691 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:43.691 00:12:43.691 10:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:43.691 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.691 [2024-07-12 10:51:00.631498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:48.977 Initializing NVMe Controllers 00:12:48.977 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:48.977 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:48.977 Initialization complete. Launching workers. 00:12:48.977 ======================================================== 00:12:48.977 Latency(us) 00:12:48.977 Device Information : IOPS MiB/s Average min max 00:12:48.977 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39985.60 156.19 3203.51 830.28 8801.75 00:12:48.977 ======================================================== 00:12:48.977 Total : 39985.60 156.19 3203.51 830.28 8801.75 00:12:48.977 00:12:48.977 [2024-07-12 10:51:05.740305] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:48.977 10:51:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:48.977 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.977 [2024-07-12 10:51:05.920873] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:54.274 Initializing NVMe Controllers 00:12:54.274 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:54.274 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:54.274 Initialization complete. Launching workers. 
00:12:54.274 ======================================================== 00:12:54.274 Latency(us) 00:12:54.274 Device Information : IOPS MiB/s Average min max 00:12:54.274 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39951.98 156.06 3203.72 838.75 7775.40 00:12:54.274 ======================================================== 00:12:54.274 Total : 39951.98 156.06 3203.72 838.75 7775.40 00:12:54.274 00:12:54.274 [2024-07-12 10:51:10.938852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:54.274 10:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:54.274 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.274 [2024-07-12 10:51:11.128038] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:59.566 [2024-07-12 10:51:16.261206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:59.566 Initializing NVMe Controllers 00:12:59.566 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:59.566 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:59.566 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:59.566 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:59.566 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:59.566 Initialization complete. Launching workers. 00:12:59.566 Starting thread on core 2 00:12:59.566 Starting thread on core 3 00:12:59.566 Starting thread on core 1 00:12:59.566 10:51:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:59.566 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.566 [2024-07-12 10:51:16.495584] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:02.869 [2024-07-12 10:51:19.566414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:02.869 Initializing NVMe Controllers 00:13:02.869 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.869 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.869 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:02.869 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:02.869 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:02.869 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:02.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:02.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:02.869 Initialization complete. Launching workers. 
00:13:02.869 Starting thread on core 1 with urgent priority queue 00:13:02.869 Starting thread on core 2 with urgent priority queue 00:13:02.869 Starting thread on core 3 with urgent priority queue 00:13:02.869 Starting thread on core 0 with urgent priority queue 00:13:02.869 SPDK bdev Controller (SPDK2 ) core 0: 10460.00 IO/s 9.56 secs/100000 ios 00:13:02.869 SPDK bdev Controller (SPDK2 ) core 1: 8053.67 IO/s 12.42 secs/100000 ios 00:13:02.869 SPDK bdev Controller (SPDK2 ) core 2: 11395.00 IO/s 8.78 secs/100000 ios 00:13:02.869 SPDK bdev Controller (SPDK2 ) core 3: 10920.00 IO/s 9.16 secs/100000 ios 00:13:02.869 ======================================================== 00:13:02.869 00:13:02.869 10:51:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:02.869 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.869 [2024-07-12 10:51:19.789539] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:02.869 Initializing NVMe Controllers 00:13:02.869 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.869 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.869 Namespace ID: 1 size: 0GB 00:13:02.869 Initialization complete. 00:13:02.869 INFO: using host memory buffer for IO 00:13:02.869 Hello world! 00:13:02.869 [2024-07-12 10:51:19.799613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:02.869 10:51:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:03.129 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.129 [2024-07-12 10:51:20.027339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:04.514 Initializing NVMe Controllers 00:13:04.514 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.514 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.514 Initialization complete. Launching workers. 
00:13:04.514 submit (in ns) avg, min, max = 6617.6, 2825.8, 4000190.0 00:13:04.514 complete (in ns) avg, min, max = 17231.9, 1644.2, 6989320.8 00:13:04.514 00:13:04.514 Submit histogram 00:13:04.514 ================ 00:13:04.514 Range in us Cumulative Count 00:13:04.514 2.813 - 2.827: 0.0048% ( 1) 00:13:04.514 2.827 - 2.840: 0.6614% ( 137) 00:13:04.514 2.840 - 2.853: 2.4492% ( 373) 00:13:04.514 2.853 - 2.867: 5.4640% ( 629) 00:13:04.514 2.867 - 2.880: 11.0957% ( 1175) 00:13:04.514 2.880 - 2.893: 16.0324% ( 1030) 00:13:04.514 2.893 - 2.907: 20.1400% ( 857) 00:13:04.514 2.907 - 2.920: 25.7477% ( 1170) 00:13:04.514 2.920 - 2.933: 30.6605% ( 1025) 00:13:04.514 2.933 - 2.947: 36.5462% ( 1228) 00:13:04.514 2.947 - 2.960: 41.8280% ( 1102) 00:13:04.514 2.960 - 2.973: 46.9852% ( 1076) 00:13:04.514 2.973 - 2.987: 52.7416% ( 1201) 00:13:04.514 2.987 - 3.000: 60.7793% ( 1677) 00:13:04.514 3.000 - 3.013: 70.7439% ( 2079) 00:13:04.514 3.013 - 3.027: 79.6252% ( 1853) 00:13:04.514 3.027 - 3.040: 86.1963% ( 1371) 00:13:04.514 3.040 - 3.053: 91.9095% ( 1192) 00:13:04.514 3.053 - 3.067: 95.3748% ( 723) 00:13:04.514 3.067 - 3.080: 97.8144% ( 509) 00:13:04.514 3.080 - 3.093: 98.8401% ( 214) 00:13:04.514 3.093 - 3.107: 99.2715% ( 90) 00:13:04.514 3.107 - 3.120: 99.4440% ( 36) 00:13:04.514 3.120 - 3.133: 99.5351% ( 19) 00:13:04.514 3.133 - 3.147: 99.5638% ( 6) 00:13:04.514 3.147 - 3.160: 99.5686% ( 1) 00:13:04.514 3.173 - 3.187: 99.5734% ( 1) 00:13:04.514 3.240 - 3.253: 99.5782% ( 1) 00:13:04.514 3.520 - 3.547: 99.5830% ( 1) 00:13:04.514 3.600 - 3.627: 99.5878% ( 1) 00:13:04.514 3.653 - 3.680: 99.5926% ( 1) 00:13:04.514 3.707 - 3.733: 99.5974% ( 1) 00:13:04.514 3.787 - 3.813: 99.6022% ( 1) 00:13:04.514 3.920 - 3.947: 99.6070% ( 1) 00:13:04.514 3.973 - 4.000: 99.6118% ( 1) 00:13:04.514 4.053 - 4.080: 99.6166% ( 1) 00:13:04.514 4.080 - 4.107: 99.6214% ( 1) 00:13:04.514 4.240 - 4.267: 99.6309% ( 2) 00:13:04.514 4.507 - 4.533: 99.6405% ( 2) 00:13:04.514 4.587 - 4.613: 99.6453% ( 1) 00:13:04.514 4.640 - 4.667: 99.6501% ( 1) 00:13:04.514 4.693 - 4.720: 99.6549% ( 1) 00:13:04.514 4.747 - 4.773: 99.6597% ( 1) 00:13:04.514 4.960 - 4.987: 99.6693% ( 2) 00:13:04.514 5.040 - 5.067: 99.6741% ( 1) 00:13:04.514 5.067 - 5.093: 99.6789% ( 1) 00:13:04.514 5.120 - 5.147: 99.6885% ( 2) 00:13:04.514 5.173 - 5.200: 99.6933% ( 1) 00:13:04.514 5.280 - 5.307: 99.6980% ( 1) 00:13:04.514 5.333 - 5.360: 99.7028% ( 1) 00:13:04.514 5.387 - 5.413: 99.7076% ( 1) 00:13:04.514 5.493 - 5.520: 99.7124% ( 1) 00:13:04.514 5.520 - 5.547: 99.7172% ( 1) 00:13:04.514 5.627 - 5.653: 99.7220% ( 1) 00:13:04.514 5.787 - 5.813: 99.7268% ( 1) 00:13:04.514 5.840 - 5.867: 99.7316% ( 1) 00:13:04.514 5.867 - 5.893: 99.7364% ( 1) 00:13:04.514 5.893 - 5.920: 99.7412% ( 1) 00:13:04.514 5.920 - 5.947: 99.7460% ( 1) 00:13:04.514 5.973 - 6.000: 99.7508% ( 1) 00:13:04.514 6.080 - 6.107: 99.7556% ( 1) 00:13:04.514 6.160 - 6.187: 99.7604% ( 1) 00:13:04.514 6.187 - 6.213: 99.7651% ( 1) 00:13:04.514 6.267 - 6.293: 99.7795% ( 3) 00:13:04.514 6.293 - 6.320: 99.7843% ( 1) 00:13:04.514 6.400 - 6.427: 99.7891% ( 1) 00:13:04.514 6.453 - 6.480: 99.7939% ( 1) 00:13:04.514 6.480 - 6.507: 99.7987% ( 1) 00:13:04.514 6.507 - 6.533: 99.8083% ( 2) 00:13:04.514 6.560 - 6.587: 99.8131% ( 1) 00:13:04.514 6.587 - 6.613: 99.8179% ( 1) 00:13:04.514 6.693 - 6.720: 99.8227% ( 1) 00:13:04.514 6.720 - 6.747: 99.8275% ( 1) 00:13:04.514 6.747 - 6.773: 99.8322% ( 1) 00:13:04.514 6.773 - 6.800: 99.8418% ( 2) 00:13:04.514 6.800 - 6.827: 99.8514% ( 2) 00:13:04.514 6.827 - 6.880: 99.8562% ( 1) 
00:13:04.514 7.253 - 7.307: 99.8610% ( 1) 00:13:04.514 7.307 - 7.360: 99.8658% ( 1) 00:13:04.514 [2024-07-12 10:51:21.119873] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:04.514 7.360 - 7.413: 99.8754% ( 2) 00:13:04.514 7.413 - 7.467: 99.8802% ( 1) 00:13:04.514 7.520 - 7.573: 99.8850% ( 1) 00:13:04.514 7.573 - 7.627: 99.8898% ( 1) 00:13:04.514 7.787 - 7.840: 99.8946% ( 1) 00:13:04.514 8.000 - 8.053: 99.8993% ( 1) 00:13:04.515 8.693 - 8.747: 99.9041% ( 1) 00:13:04.515 12.213 - 12.267: 99.9089% ( 1) 00:13:04.515 3986.773 - 4014.080: 100.0000% ( 19) 00:13:04.515 00:13:04.515 Complete histogram 00:13:04.515 ================== 00:13:04.515 Range in us Cumulative Count 00:13:04.515 1.640 - 1.647: 0.0527% ( 11) 00:13:04.515 1.647 - 1.653: 0.9107% ( 179) 00:13:04.515 1.653 - 1.660: 1.0017% ( 19) 00:13:04.515 1.660 - 1.667: 1.1934% ( 40) 00:13:04.515 1.667 - 1.673: 1.3468% ( 32) 00:13:04.515 1.673 - 1.680: 1.3660% ( 4) 00:13:04.515 1.680 - 1.687: 1.3852% ( 4) 00:13:04.515 1.687 - 1.693: 25.0096% ( 4929) 00:13:04.515 1.693 - 1.700: 55.1572% ( 6290) 00:13:04.515 1.700 - 1.707: 58.3877% ( 674) 00:13:04.515 1.707 - 1.720: 76.6679% ( 3814) 00:13:04.515 1.720 - 1.733: 84.1162% ( 1554) 00:13:04.515 1.733 - 1.747: 84.8016% ( 143) 00:13:04.515 1.747 - 1.760: 88.1614% ( 701) 00:13:04.515 1.760 - 1.773: 93.1077% ( 1032) 00:13:04.515 1.773 - 1.787: 96.7839% ( 767) 00:13:04.515 1.787 - 1.800: 98.7059% ( 401) 00:13:04.515 1.800 - 1.813: 99.3002% ( 124) 00:13:04.515 1.813 - 1.827: 99.4009% ( 21) 00:13:04.515 1.827 - 1.840: 99.4057% ( 1) 00:13:04.515 1.840 - 1.853: 99.4105% ( 1) 00:13:04.515 1.867 - 1.880: 99.4153% ( 1) 00:13:04.515 1.920 - 1.933: 99.4201% ( 1) 00:13:04.515 2.067 - 2.080: 99.4248% ( 1) 00:13:04.515 4.347 - 4.373: 99.4344% ( 2) 00:13:04.515 4.427 - 4.453: 99.4392% ( 1) 00:13:04.515 4.533 - 4.560: 99.4440% ( 1) 00:13:04.515 4.747 - 4.773: 99.4536% ( 2) 00:13:04.515 4.773 - 4.800: 99.4632% ( 2) 00:13:04.515 4.800 - 4.827: 99.4680% ( 1) 00:13:04.515 4.933 - 4.960: 99.4776% ( 2) 00:13:04.515 5.013 - 5.040: 99.4872% ( 2) 00:13:04.515 5.040 - 5.067: 99.4919% ( 1) 00:13:04.515 5.067 - 5.093: 99.4967% ( 1) 00:13:04.515 5.200 - 5.227: 99.5015% ( 1) 00:13:04.515 5.307 - 5.333: 99.5063% ( 1) 00:13:04.515 5.333 - 5.360: 99.5111% ( 1) 00:13:04.515 5.413 - 5.440: 99.5159% ( 1) 00:13:04.515 5.440 - 5.467: 99.5255% ( 2) 00:13:04.515 5.520 - 5.547: 99.5303% ( 1) 00:13:04.515 5.653 - 5.680: 99.5351% ( 1) 00:13:04.515 5.680 - 5.707: 99.5399% ( 1) 00:13:04.515 5.733 - 5.760: 99.5447% ( 1) 00:13:04.515 5.760 - 5.787: 99.5495% ( 1) 00:13:04.515 5.947 - 5.973: 99.5543% ( 1) 00:13:04.515 6.293 - 6.320: 99.5590% ( 1) 00:13:04.515 6.320 - 6.347: 99.5638% ( 1) 00:13:04.515 6.533 - 6.560: 99.5686% ( 1) 00:13:04.515 7.200 - 7.253: 99.5734% ( 1) 00:13:04.515 9.067 - 9.120: 99.5782% ( 1) 00:13:04.515 11.520 - 11.573: 99.5830% ( 1) 00:13:04.515 11.573 - 11.627: 99.5878% ( 1) 00:13:04.515 12.693 - 12.747: 99.5926% ( 1) 00:13:04.515 15.360 - 15.467: 99.5974% ( 1) 00:13:04.515 31.573 - 31.787: 99.6022% ( 1) 00:13:04.515 33.920 - 34.133: 99.6070% ( 1) 00:13:04.515 40.533 - 40.747: 99.6118% ( 1) 00:13:04.515 983.040 - 989.867: 99.6166% ( 1) 00:13:04.515 3986.773 - 4014.080: 99.9952% ( 79) 00:13:04.515 6963.200 - 6990.507: 100.0000% ( 1) 00:13:04.515 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:04.515 [ 00:13:04.515 { 00:13:04.515 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.515 "subtype": "Discovery", 00:13:04.515 "listen_addresses": [], 00:13:04.515 "allow_any_host": true, 00:13:04.515 "hosts": [] 00:13:04.515 }, 00:13:04.515 { 00:13:04.515 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:04.515 "subtype": "NVMe", 00:13:04.515 "listen_addresses": [ 00:13:04.515 { 00:13:04.515 "trtype": "VFIOUSER", 00:13:04.515 "adrfam": "IPv4", 00:13:04.515 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:04.515 "trsvcid": "0" 00:13:04.515 } 00:13:04.515 ], 00:13:04.515 "allow_any_host": true, 00:13:04.515 "hosts": [], 00:13:04.515 "serial_number": "SPDK1", 00:13:04.515 "model_number": "SPDK bdev Controller", 00:13:04.515 "max_namespaces": 32, 00:13:04.515 "min_cntlid": 1, 00:13:04.515 "max_cntlid": 65519, 00:13:04.515 "namespaces": [ 00:13:04.515 { 00:13:04.515 "nsid": 1, 00:13:04.515 "bdev_name": "Malloc1", 00:13:04.515 "name": "Malloc1", 00:13:04.515 "nguid": "3A726C69B64745C69EBFD24F985C28DC", 00:13:04.515 "uuid": "3a726c69-b647-45c6-9ebf-d24f985c28dc" 00:13:04.515 }, 00:13:04.515 { 00:13:04.515 "nsid": 2, 00:13:04.515 "bdev_name": "Malloc3", 00:13:04.515 "name": "Malloc3", 00:13:04.515 "nguid": "49DDB6D893FF496496F89AA8380FD819", 00:13:04.515 "uuid": "49ddb6d8-93ff-4964-96f8-9aa8380fd819" 00:13:04.515 } 00:13:04.515 ] 00:13:04.515 }, 00:13:04.515 { 00:13:04.515 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:04.515 "subtype": "NVMe", 00:13:04.515 "listen_addresses": [ 00:13:04.515 { 00:13:04.515 "trtype": "VFIOUSER", 00:13:04.515 "adrfam": "IPv4", 00:13:04.515 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:04.515 "trsvcid": "0" 00:13:04.515 } 00:13:04.515 ], 00:13:04.515 "allow_any_host": true, 00:13:04.515 "hosts": [], 00:13:04.515 "serial_number": "SPDK2", 00:13:04.515 "model_number": "SPDK bdev Controller", 00:13:04.515 "max_namespaces": 32, 00:13:04.515 "min_cntlid": 1, 00:13:04.515 "max_cntlid": 65519, 00:13:04.515 "namespaces": [ 00:13:04.515 { 00:13:04.515 "nsid": 1, 00:13:04.515 "bdev_name": "Malloc2", 00:13:04.515 "name": "Malloc2", 00:13:04.515 "nguid": "78137F6CE0B64C549DBE31D9D16F6A84", 00:13:04.515 "uuid": "78137f6c-e0b6-4c54-9dbe-31d9d16f6a84" 00:13:04.515 } 00:13:04.515 ] 00:13:04.515 } 00:13:04.515 ] 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2010983 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # 
'[' '!' -e /tmp/aer_touch_file ']' 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:04.515 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:04.515 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.515 [2024-07-12 10:51:21.466408] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:04.515 Malloc4 00:13:04.776 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:04.776 [2024-07-12 10:51:21.652693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:04.776 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:04.776 Asynchronous Event Request test 00:13:04.776 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.776 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.776 Registering asynchronous event callbacks... 00:13:04.776 Starting namespace attribute notice tests for all controllers... 00:13:04.776 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:04.776 aer_cb - Changed Namespace 00:13:04.776 Cleaning up... 
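The AER exchange traced above reduces to a short sequence: start the aer test tool against the second vfio-user controller, wait for it to drop its touch file, hot-add a namespace to provoke a namespace-attribute notice, then reap the tool. A minimal sketch of that flow, using the paths and NQNs from this run (SPDK_DIR is an assumed variable standing in for this workspace's spdk checkout):

    # Sketch only -- condensed from the nvmf_vfio_user.sh trace above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK_DIR/scripts/rpc.py"
    traddr=/var/run/vfio-user/domain/vfio-user2/2
    subnqn=nqn.2019-07.io.spdk:cnode2

    # Launch the AER listener; -t makes it touch a file once its callbacks are armed.
    "$SPDK_DIR/test/nvme/aer/aer" \
        -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" \
        -n 2 -g -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done   # waitforfile equivalent
    rm -f /tmp/aer_touch_file

    # Hot-add a namespace; aer_cb above then reports log page 4 (changed namespace list).
    "$rpc" bdev_malloc_create 64 512 --name Malloc4
    "$rpc" nvmf_subsystem_add_ns "$subnqn" Malloc4 -n 2
    wait "$aerpid"

The second nvmf_get_subsystems dump below confirms the result: cnode2 now carries Malloc4 as nsid 2 alongside Malloc2.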
00:13:05.036 [ 00:13:05.036 { 00:13:05.036 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:05.036 "subtype": "Discovery", 00:13:05.036 "listen_addresses": [], 00:13:05.036 "allow_any_host": true, 00:13:05.036 "hosts": [] 00:13:05.036 }, 00:13:05.036 { 00:13:05.036 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:05.036 "subtype": "NVMe", 00:13:05.036 "listen_addresses": [ 00:13:05.036 { 00:13:05.036 "trtype": "VFIOUSER", 00:13:05.036 "adrfam": "IPv4", 00:13:05.036 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:05.036 "trsvcid": "0" 00:13:05.036 } 00:13:05.036 ], 00:13:05.036 "allow_any_host": true, 00:13:05.036 "hosts": [], 00:13:05.036 "serial_number": "SPDK1", 00:13:05.036 "model_number": "SPDK bdev Controller", 00:13:05.036 "max_namespaces": 32, 00:13:05.036 "min_cntlid": 1, 00:13:05.036 "max_cntlid": 65519, 00:13:05.036 "namespaces": [ 00:13:05.036 { 00:13:05.037 "nsid": 1, 00:13:05.037 "bdev_name": "Malloc1", 00:13:05.037 "name": "Malloc1", 00:13:05.037 "nguid": "3A726C69B64745C69EBFD24F985C28DC", 00:13:05.037 "uuid": "3a726c69-b647-45c6-9ebf-d24f985c28dc" 00:13:05.037 }, 00:13:05.037 { 00:13:05.037 "nsid": 2, 00:13:05.037 "bdev_name": "Malloc3", 00:13:05.037 "name": "Malloc3", 00:13:05.037 "nguid": "49DDB6D893FF496496F89AA8380FD819", 00:13:05.037 "uuid": "49ddb6d8-93ff-4964-96f8-9aa8380fd819" 00:13:05.037 } 00:13:05.037 ] 00:13:05.037 }, 00:13:05.037 { 00:13:05.037 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:05.037 "subtype": "NVMe", 00:13:05.037 "listen_addresses": [ 00:13:05.037 { 00:13:05.037 "trtype": "VFIOUSER", 00:13:05.037 "adrfam": "IPv4", 00:13:05.037 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:05.037 "trsvcid": "0" 00:13:05.037 } 00:13:05.037 ], 00:13:05.037 "allow_any_host": true, 00:13:05.037 "hosts": [], 00:13:05.037 "serial_number": "SPDK2", 00:13:05.037 "model_number": "SPDK bdev Controller", 00:13:05.037 "max_namespaces": 32, 00:13:05.037 "min_cntlid": 1, 00:13:05.037 "max_cntlid": 65519, 00:13:05.037 "namespaces": [ 00:13:05.037 { 00:13:05.037 "nsid": 1, 00:13:05.037 "bdev_name": "Malloc2", 00:13:05.037 "name": "Malloc2", 00:13:05.037 "nguid": "78137F6CE0B64C549DBE31D9D16F6A84", 00:13:05.037 "uuid": "78137f6c-e0b6-4c54-9dbe-31d9d16f6a84" 00:13:05.037 }, 00:13:05.037 { 00:13:05.037 "nsid": 2, 00:13:05.037 "bdev_name": "Malloc4", 00:13:05.037 "name": "Malloc4", 00:13:05.037 "nguid": "1A2B4C46CE0D450DABA4D9816AC9D97C", 00:13:05.037 "uuid": "1a2b4c46-ce0d-450d-aba4-d9816ac9d97c" 00:13:05.037 } 00:13:05.037 ] 00:13:05.037 } 00:13:05.037 ] 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2010983 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2001350 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2001350 ']' 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2001350 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001350 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001350' 00:13:05.037 killing process with pid 2001350 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2001350 00:13:05.037 10:51:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2001350 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2011024 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2011024' 00:13:05.298 Process pid: 2011024 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2011024 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2011024 ']' 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.298 10:51:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:05.298 [2024-07-12 10:51:22.105952] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:05.298 [2024-07-12 10:51:22.106861] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:05.298 [2024-07-12 10:51:22.106902] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.298 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.298 [2024-07-12 10:51:22.180604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.298 [2024-07-12 10:51:22.234622] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.298 [2024-07-12 10:51:22.234657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
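At this point the target has been relaunched for the interrupt-mode pass: the same nvmf_tgt binary runs with --interrupt-mode on cores 0-3, and the trace below creates the VFIOUSER transport with '-M -I' before rebuilding both subsystems. Condensed into a sketch (same workspace paths; the socket wait is shown as a comment rather than the framework's waitforlisten helper):

    # Sketch of the interrupt-mode setup the trace below performs.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK_DIR/scripts/rpc.py"

    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # ...wait for /var/tmp/spdk.sock to accept RPCs, then:
    "$rpc" nvmf_create_transport -t VFIOUSER -M -I

    # Two controllers, one malloc namespace each, exactly as in the first pass.
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
        "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done

The spdk_thread_set_interrupt_mode notices below confirm that the app thread and all four poll-group threads come up in interrupt mode.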
00:13:05.298 [2024-07-12 10:51:22.234663] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.298 [2024-07-12 10:51:22.234667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.298 [2024-07-12 10:51:22.234671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.298 [2024-07-12 10:51:22.234801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.298 [2024-07-12 10:51:22.234952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.298 [2024-07-12 10:51:22.235087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.298 [2024-07-12 10:51:22.235089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.559 [2024-07-12 10:51:22.301664] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:05.559 [2024-07-12 10:51:22.302615] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:05.559 [2024-07-12 10:51:22.302999] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:05.559 [2024-07-12 10:51:22.303423] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:05.559 [2024-07-12 10:51:22.303483] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:06.130 10:51:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.130 10:51:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:06.130 10:51:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:07.072 10:51:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:07.072 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:07.072 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:07.072 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:07.072 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:07.072 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:07.333 Malloc1 00:13:07.333 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:07.595 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:07.856 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:07.856 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:07.856 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:07.856 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:08.115 Malloc2 00:13:08.115 10:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:08.376 10:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:08.376 10:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2011024 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2011024 ']' 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2011024 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2011024 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2011024' 00:13:08.637 killing process with pid 2011024 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2011024 00:13:08.637 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2011024 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:08.899 00:13:08.899 real 0m50.178s 00:13:08.899 user 3m18.950s 00:13:08.899 sys 0m2.945s 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:08.899 ************************************ 00:13:08.899 END TEST nvmf_vfio_user 00:13:08.899 ************************************ 00:13:08.899 10:51:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:08.899 10:51:25 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:08.899 10:51:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:08.899 10:51:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.899 10:51:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.899 ************************************ 00:13:08.899 START 
TEST nvmf_vfio_user_nvme_compliance 00:13:08.899 ************************************ 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:08.899 * Looking for test storage... 00:13:08.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.899 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2011776 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2011776' 00:13:08.900 Process pid: 2011776 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2011776 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2011776 ']' 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.900 10:51:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:09.161 [2024-07-12 10:51:25.894658] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:09.162 [2024-07-12 10:51:25.894727] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.162 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.162 [2024-07-12 10:51:25.975197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:09.162 [2024-07-12 10:51:26.037642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.162 [2024-07-12 10:51:26.037680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.162 [2024-07-12 10:51:26.037685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.162 [2024-07-12 10:51:26.037690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.162 [2024-07-12 10:51:26.037694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
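With the compliance target up (-m 0x7, three reactors), the run that follows condenses to a handful of RPCs plus the nvme_compliance binary; rpc_cmd in the trace is the test framework's RPC helper, and scripts/rpc.py issues the same calls. A sketch under this run's paths:

    # Sketch of the compliance setup executed via rpc_cmd in the trace below.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK_DIR/scripts/rpc.py"
    nqn=nqn.2021-09.io.spdk:cnode0
    traddr=/var/run/vfio-user

    "$rpc" nvmf_create_transport -t VFIOUSER
    mkdir -p "$traddr"
    "$rpc" bdev_malloc_create 64 512 -b malloc0
    "$rpc" nvmf_create_subsystem "$nqn" -a -s spdk -m 32   # -m 32: max namespaces
    "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a "$traddr" -s 0

    "$SPDK_DIR/test/nvme/compliance/nvme_compliance" -g \
        -r "trtype:VFIOUSER traddr:$traddr subnqn:$nqn"

The CUnit summary at the end of the run reports 18 tests and 360 asserts, all passing, in about 1.5 seconds.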
00:13:09.162 [2024-07-12 10:51:26.037830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.162 [2024-07-12 10:51:26.037981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.162 [2024-07-12 10:51:26.037983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.733 10:51:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.733 10:51:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:09.733 10:51:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:11.118 malloc0 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:11.118 10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.118 
10:51:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:11.118 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.118 00:13:11.118 00:13:11.118 CUnit - A unit testing framework for C - Version 2.1-3 00:13:11.118 http://cunit.sourceforge.net/ 00:13:11.118 00:13:11.118 00:13:11.118 Suite: nvme_compliance 00:13:11.118 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 10:51:27.913587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.118 [2024-07-12 10:51:27.914885] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:11.118 [2024-07-12 10:51:27.914896] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:11.118 [2024-07-12 10:51:27.914900] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:11.118 [2024-07-12 10:51:27.916605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.118 passed 00:13:11.118 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 10:51:27.992066] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.118 [2024-07-12 10:51:27.995086] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.118 passed 00:13:11.118 Test: admin_identify_ns ...[2024-07-12 10:51:28.071478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.379 [2024-07-12 10:51:28.135144] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:11.379 [2024-07-12 10:51:28.143134] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:11.379 [2024-07-12 10:51:28.164210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.379 passed 00:13:11.379 Test: admin_get_features_mandatory_features ...[2024-07-12 10:51:28.238415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.379 [2024-07-12 10:51:28.241440] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.379 passed 00:13:11.379 Test: admin_get_features_optional_features ...[2024-07-12 10:51:28.317898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.379 [2024-07-12 10:51:28.320922] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.379 passed 00:13:11.639 Test: admin_set_features_number_of_queues ...[2024-07-12 10:51:28.396653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.639 [2024-07-12 10:51:28.501200] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.639 passed 00:13:11.639 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 10:51:28.574385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.639 [2024-07-12 10:51:28.577415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.639 passed 00:13:11.899 Test: admin_get_log_page_with_lpo ...[2024-07-12 10:51:28.654144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.899 [2024-07-12 10:51:28.723128] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:11.899 [2024-07-12 10:51:28.736177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.899 passed 00:13:11.899 Test: fabric_property_get ...[2024-07-12 10:51:28.810373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.899 [2024-07-12 10:51:28.811570] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:11.899 [2024-07-12 10:51:28.813389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.899 passed 00:13:12.159 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 10:51:28.888824] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.159 [2024-07-12 10:51:28.890016] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:12.159 [2024-07-12 10:51:28.891844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.159 passed 00:13:12.159 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 10:51:28.966559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.159 [2024-07-12 10:51:29.050145] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:12.159 [2024-07-12 10:51:29.066125] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:12.159 [2024-07-12 10:51:29.071202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.159 passed 00:13:12.419 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 10:51:29.146238] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.419 [2024-07-12 10:51:29.147438] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:12.419 [2024-07-12 10:51:29.149255] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.419 passed 00:13:12.419 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 10:51:29.226487] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.419 [2024-07-12 10:51:29.302136] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:12.419 [2024-07-12 10:51:29.326136] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:12.419 [2024-07-12 10:51:29.331200] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.419 passed 00:13:12.679 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 10:51:29.404389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.679 [2024-07-12 10:51:29.405585] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:12.679 [2024-07-12 10:51:29.405603] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:12.679 [2024-07-12 10:51:29.407408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.679 passed 00:13:12.679 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 10:51:29.482473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.679 [2024-07-12 10:51:29.578134] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:12.679 [2024-07-12 10:51:29.586131] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:12.679 [2024-07-12 10:51:29.594130] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:12.679 [2024-07-12 10:51:29.602127] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:12.679 [2024-07-12 10:51:29.631203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.679 passed 00:13:12.940 Test: admin_create_io_sq_verify_pc ...[2024-07-12 10:51:29.703386] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.940 [2024-07-12 10:51:29.722134] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:12.940 [2024-07-12 10:51:29.739538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.940 passed 00:13:12.940 Test: admin_create_io_qp_max_qps ...[2024-07-12 10:51:29.814952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:14.323 [2024-07-12 10:51:30.923128] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:14.584 [2024-07-12 10:51:31.312700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:14.584 passed 00:13:14.584 Test: admin_create_io_sq_shared_cq ...[2024-07-12 10:51:31.386485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:14.584 [2024-07-12 10:51:31.525129] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:14.584 [2024-07-12 10:51:31.562177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:14.845 passed 00:13:14.845 00:13:14.845 Run Summary: Type Total Ran Passed Failed Inactive 00:13:14.845 suites 1 1 n/a 0 0 00:13:14.845 tests 18 18 18 0 0 00:13:14.845 asserts 360 360 360 0 n/a 00:13:14.845 00:13:14.845 Elapsed time = 1.502 seconds 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2011776 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2011776 ']' 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2011776 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2011776 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2011776' 00:13:14.845 killing process with pid 2011776 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2011776 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2011776 00:13:14.845 10:51:31 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:14.845 00:13:14.845 real 0m6.079s 00:13:14.845 user 0m17.363s 00:13:14.845 sys 0m0.479s 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:14.845 10:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:14.845 ************************************ 00:13:14.845 END TEST nvmf_vfio_user_nvme_compliance 00:13:14.845 ************************************ 00:13:14.845 10:51:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:14.845 10:51:31 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:14.845 10:51:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:14.845 10:51:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.845 10:51:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.107 ************************************ 00:13:15.107 START TEST nvmf_vfio_user_fuzz 00:13:15.107 ************************************ 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:15.107 * Looking for test storage... 00:13:15.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.107 10:51:31 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2013162 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2013162' 00:13:15.107 Process pid: 2013162 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2013162 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2013162 ']' 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
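The fuzz stage pins the target to core 0 (-m 0x1 above) and the fuzzer to core 1, builds the same single-namespace vfio-user subsystem, and then runs 30 seconds of randomized commands against it. Condensed, with this run's seed and paths (flag comments describe only what the trace itself shows):

    # Sketch of the fuzz flow; setup RPCs mirror the rpc_cmd calls in the trace below.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK_DIR/scripts/rpc.py"
    nqn=nqn.2021-09.io.spdk:cnode0
    trid="trtype:VFIOUSER subnqn:$nqn traddr:/var/run/vfio-user"

    "$rpc" nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    "$rpc" bdev_malloc_create 64 512 -b malloc0
    "$rpc" nvmf_create_subsystem "$nqn" -a -s spdk
    "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a /var/run/vfio-user -s 0

    # -m 0x2: fuzzer on core 1; -t 30: bounded run (timestamps below span ~31 s);
    # -S 123456: fixed seed; -F/-N/-a passed exactly as in this run's invocation.
    "$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" \
        -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

The summary that follows tallies completed versus successful commands per queue: roughly 1.40 M I/O commands and 334 k admin commands completed without crashing the target.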
00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.107 10:51:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.048 10:51:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.048 10:51:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:16.048 10:51:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.988 malloc0 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:16.988 10:51:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:49.180 Fuzzing completed. 
Shutting down the fuzz application 00:13:49.180 00:13:49.180 Dumping successful admin opcodes: 00:13:49.180 8, 9, 10, 24, 00:13:49.180 Dumping successful io opcodes: 00:13:49.180 0, 00:13:49.180 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1395819, total successful commands: 5480, random_seed: 4089364224 00:13:49.180 NS: 0x200003a1ef00 admin qp, Total commands completed: 334511, total successful commands: 2690, random_seed: 1573840192 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2013162 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2013162 ']' 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2013162 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2013162 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2013162' 00:13:49.180 killing process with pid 2013162 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2013162 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2013162 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:49.180 00:13:49.180 real 0m32.698s 00:13:49.180 user 0m36.969s 00:13:49.180 sys 0m24.187s 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.180 10:52:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:49.180 ************************************ 00:13:49.180 END TEST nvmf_vfio_user_fuzz 00:13:49.180 ************************************ 00:13:49.180 10:52:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:49.180 10:52:04 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:49.180 10:52:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.180 10:52:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.180 10:52:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.180 ************************************ 
00:13:49.180 START TEST nvmf_host_management 00:13:49.180 ************************************ 00:13:49.180 10:52:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:49.180 * Looking for test storage... 00:13:49.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.181 
10:52:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.181 10:52:04 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.181 10:52:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:55.769 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:55.769 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:55.769 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:55.769 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.769 10:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:55.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:13:55.769 00:13:55.769 --- 10.0.0.2 ping statistics --- 00:13:55.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.769 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.484 ms 00:13:55.769 00:13:55.769 --- 10.0.0.1 ping statistics --- 00:13:55.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.769 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2023151 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2023151 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2023151 ']' 00:13:55.769 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.770 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.770 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:55.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.770 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.770 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:55.770 [2024-07-12 10:52:12.144278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:55.770 [2024-07-12 10:52:12.144336] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.770 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.770 [2024-07-12 10:52:12.229884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.770 [2024-07-12 10:52:12.325755] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.770 [2024-07-12 10:52:12.325805] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.770 [2024-07-12 10:52:12.325814] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.770 [2024-07-12 10:52:12.325821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.770 [2024-07-12 10:52:12.325827] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.770 [2024-07-12 10:52:12.325988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.770 [2024-07-12 10:52:12.326156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.770 [2024-07-12 10:52:12.326392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.770 [2024-07-12 10:52:12.326392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:56.030 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.030 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:56.030 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.030 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.030 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.030 10:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.030 10:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.030 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.030 10:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.030 [2024-07-12 10:52:12.995357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.030 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.030 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:56.030 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.030 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.030 10:52:13 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.291 Malloc0 00:13:56.291 [2024-07-12 10:52:13.064836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2023521 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2023521 /var/tmp/bdevperf.sock 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2023521 ']' 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:56.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
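For orientation: the trace parks here until bdevperf's RPC socket accepts connections. A minimal sketch of that wait, assuming a pid/socket pair like the perfpid and /var/tmp/bdevperf.sock seen in this run (hypothetical standalone helper; the real waitforlisten in common/autotest_common.sh does additional checks):

wait_for_rpc_sock() {
  # sketch only: poll for the app's Unix-domain RPC socket to appear
  local pid=$1 sock=$2 i
  for ((i = 100; i > 0; i--)); do
    [[ -S $sock ]] && return 0              # socket created: app is listening
    kill -0 "$pid" 2>/dev/null || return 1  # app died before it could listen
    sleep 0.1
  done
  return 1                                  # timed out
}
wait_for_rpc_sock "$perfpid" /var/tmp/bdevperf.sock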
00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:56.291 { 00:13:56.291 "params": { 00:13:56.291 "name": "Nvme$subsystem", 00:13:56.291 "trtype": "$TEST_TRANSPORT", 00:13:56.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:56.291 "adrfam": "ipv4", 00:13:56.291 "trsvcid": "$NVMF_PORT", 00:13:56.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:56.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:56.291 "hdgst": ${hdgst:-false}, 00:13:56.291 "ddgst": ${ddgst:-false} 00:13:56.291 }, 00:13:56.291 "method": "bdev_nvme_attach_controller" 00:13:56.291 } 00:13:56.291 EOF 00:13:56.291 )") 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:56.291 10:52:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:56.291 "params": { 00:13:56.291 "name": "Nvme0", 00:13:56.291 "trtype": "tcp", 00:13:56.291 "traddr": "10.0.0.2", 00:13:56.291 "adrfam": "ipv4", 00:13:56.291 "trsvcid": "4420", 00:13:56.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:56.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:56.291 "hdgst": false, 00:13:56.291 "ddgst": false 00:13:56.291 }, 00:13:56.291 "method": "bdev_nvme_attach_controller" 00:13:56.291 }' 00:13:56.291 [2024-07-12 10:52:13.172713] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:56.291 [2024-07-12 10:52:13.172787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023521 ] 00:13:56.291 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.291 [2024-07-12 10:52:13.254851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.551 [2024-07-12 10:52:13.351819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.812 Running I/O for 10 seconds... 
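The waitforio loop that produces the next block of trace polls bdevperf's iostat until read completions appear. Condensed into a standalone sketch (the rpc.py invocation path and the poll interval are assumptions; the RPC name and jq filter are exactly those in the trace):

for ((i = 10; i > 0; i--)); do
  # same RPC and filter as host_management.sh@55 below
  ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
  [[ $ops -ge 100 ]] && break   # the trace below sees 451 ops on the first pass
  sleep 1                       # interval assumed; the in-tree loop may differ
done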
00:13:57.075 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.076 10:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.076 10:52:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.076 10:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:13:57.076 10:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:13:57.076 10:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:57.076 10:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:57.076 10:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:57.076 10:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:57.076 10:52:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.076 10:52:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.076 [2024-07-12 10:52:14.045185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1267e40 is same with the state(5) to be set 00:13:57.076 [2024-07-12 10:52:14.045254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1267e40 is same with the state(5) to be set 00:13:57.076 [2024-07-12 10:52:14.045263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1267e40 is same with the state(5) to be 
set 00:13:57.076 [2024-07-12 10:52:14.045271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1267e40 is same with the state(5) to be set 00:13:57.076 [tcp.c:1607 message repeated, timestamps 10:52:14.045278 through 10:52:14.045681; every repetition reports tqpair=0x1267e40 in the same unchanged state] 00:13:57.076 [2024-07-12 10:52:14.046439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-07-12 10:52:14.046497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-07-12 10:52:14.046530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-07-12 10:52:14.046550] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.046983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.046994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:13:57.077 [2024-07-12 10:52:14.047162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.077 [2024-07-12 10:52:14.047303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.077 [2024-07-12 10:52:14.047312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 
[2024-07-12 10:52:14.047362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 
10:52:14.047563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:57.078 [2024-07-12 10:52:14.047751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.078 [2024-07-12 10:52:14.047762] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:57.078 [2024-07-12 10:52:14.047771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:57.078 [2024-07-12 10:52:14.047781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0c4f0 is same with the state(5) to be set
00:13:57.078 [2024-07-12 10:52:14.047848] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb0c4f0 was disconnected and freed. reset controller.
00:13:57.078 [2024-07-12 10:52:14.049090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:13:57.078 10:52:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:57.078 10:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:13:57.078 10:52:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:57.078 10:52:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:57.078 task offset: 65536 on job bdev=Nvme0n1 fails
00:13:57.078
00:13:57.078 Latency(us)
00:13:57.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:57.078 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:57.078 Job: Nvme0n1 ended in about 0.45 seconds with error
00:13:57.078 Verification LBA range: start 0x0 length 0x400
00:13:57.078 Nvme0n1 : 0.45 1148.50 71.78 143.56 0.00 48176.69 7591.25 38666.24
00:13:57.078 ===================================================================================================================
00:13:57.078 Total : 1148.50 71.78 143.56 0.00 48176.69 7591.25 38666.24
00:13:57.078 [2024-07-12 10:52:14.051371] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:57.078 [2024-07-12 10:52:14.051419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fb3b0 (9): Bad file descriptor
00:13:57.339 10:52:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:57.339 10:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:13:57.339 [2024-07-12 10:52:14.143609] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
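The aborted READs above are the expected effect of the target tearing down the I/O submission queue while bdevperf still has a full queue depth outstanding: every queued command completes with status (00/08), i.e. status code type 0x0 (generic) and status code 0x08, "Command Aborted due to SQ Deletion", after which bdev_nvme disconnects the qpair and resets the controller. A minimal shell sketch of how that (SCT/SC) pair packs into the completion's status field, assuming the standard NVMe completion layout; decode_nvme_status is a hypothetical helper shown only for illustration, not part of the test suite:

  # Decode the 15-bit status field (CQE dword 3, bits 31:17) into the
  # SCT/SC pair that spdk_nvme_print_completion renders as "(00/08)".
  decode_nvme_status() {
      local status=$1
      local sc=$(( status & 0xff ))          # status code (bits 7:0)
      local sct=$(( (status >> 8) & 0x7 ))   # status code type (bits 10:8)
      local dnr=$(( (status >> 14) & 0x1 ))  # do-not-retry (bit 14)
      printf 'sct=0x%02x sc=0x%02x dnr=%d\n' "$sct" "$sc" "$dnr"
  }
  decode_nvme_status $(( (0x0 << 8) | 0x08 ))   # prints: sct=0x00 sc=0x08 dnr=0

For generic status (SCT 0x0), SC 0x08 is the SQ-deletion abort, and the dnr:0 printed in each completion means the host may retry those commands once the controller reset finishes, which is exactly what the successful reset notice above sets up.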
00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2023521 00:13:58.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2023521) - No such process 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:58.282 { 00:13:58.282 "params": { 00:13:58.282 "name": "Nvme$subsystem", 00:13:58.282 "trtype": "$TEST_TRANSPORT", 00:13:58.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:58.282 "adrfam": "ipv4", 00:13:58.282 "trsvcid": "$NVMF_PORT", 00:13:58.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:58.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:58.282 "hdgst": ${hdgst:-false}, 00:13:58.282 "ddgst": ${ddgst:-false} 00:13:58.282 }, 00:13:58.282 "method": "bdev_nvme_attach_controller" 00:13:58.282 } 00:13:58.282 EOF 00:13:58.282 )") 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:58.282 10:52:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:58.282 "params": { 00:13:58.282 "name": "Nvme0", 00:13:58.282 "trtype": "tcp", 00:13:58.282 "traddr": "10.0.0.2", 00:13:58.282 "adrfam": "ipv4", 00:13:58.282 "trsvcid": "4420", 00:13:58.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:58.282 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:58.282 "hdgst": false, 00:13:58.282 "ddgst": false 00:13:58.282 }, 00:13:58.282 "method": "bdev_nvme_attach_controller" 00:13:58.282 }' 00:13:58.282 [2024-07-12 10:52:15.119057] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:58.282 [2024-07-12 10:52:15.119113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023872 ] 00:13:58.282 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.282 [2024-07-12 10:52:15.196252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.282 [2024-07-12 10:52:15.259501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.543 Running I/O for 1 seconds... 
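The bdevperf invocation above never touches a config file on disk: gen_nvmf_target_json expands one bdev_nvme_attach_controller entry per subsystem argument through the heredoc, jq assembles the final document, and the result reaches bdevperf over an anonymous descriptor (--json /dev/fd/62) via bash process substitution. A standalone sketch of the same pattern, runnable outside the harness, follows; the binary path and the top-level "subsystems" wrapper are assumptions about the final JSON shape, which the harness may wrap slightly differently. The one-second run it drives reports its results in the table below.

  # Equivalent one-controller bdev config written out explicitly; feed it
  # to bdevperf the way the test does, just with a named temp file.
  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Same workload knobs as the log: queue depth 64, 64 KiB verify I/O, 1 second.
  ./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1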
00:13:59.928
00:13:59.928 Latency(us)
00:13:59.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:59.928 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:59.928 Verification LBA range: start 0x0 length 0x400
00:13:59.928 Nvme0n1 : 1.03 1311.55 81.97 0.00 0.00 48038.57 4532.91 38884.69
00:13:59.928 ===================================================================================================================
00:13:59.928 Total : 1311.55 81.97 0.00 0.00 48038.57 4532.91 38884.69
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2023151 ']'
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2023151
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2023151 ']'
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2023151
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2023151
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2023151'
killing process with pid 2023151
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2023151
00:13:59.928 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2023151
00:14:00.190 [2024-07-12 10:52:16.932829] app.c:
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:00.190 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.190 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.190 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.190 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.190 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.190 10:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.190 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.190 10:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.104 10:52:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.104 10:52:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:02.104 00:14:02.104 real 0m14.400s 00:14:02.104 user 0m23.187s 00:14:02.104 sys 0m6.540s 00:14:02.104 10:52:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:02.104 10:52:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.104 ************************************ 00:14:02.104 END TEST nvmf_host_management 00:14:02.104 ************************************ 00:14:02.105 10:52:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:02.105 10:52:19 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:02.105 10:52:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:02.105 10:52:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.105 10:52:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.366 ************************************ 00:14:02.366 START TEST nvmf_lvol 00:14:02.366 ************************************ 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:02.366 * Looking for test storage... 
00:14:02.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.366 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.367 10:52:19 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.367 10:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:10.513 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:10.513 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:10.513 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:10.514 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:10.514 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.514 
10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:14:10.514 00:14:10.514 --- 10.0.0.2 ping statistics --- 00:14:10.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.514 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:14:10.514 00:14:10.514 --- 10.0.0.1 ping statistics --- 00:14:10.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.514 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2028247 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2028247 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2028247 ']' 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.514 10:52:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:10.514 [2024-07-12 10:52:26.607461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:10.514 [2024-07-12 10:52:26.607529] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.514 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.514 [2024-07-12 10:52:26.698207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:10.514 [2024-07-12 10:52:26.795028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.514 [2024-07-12 10:52:26.795086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:10.514 [2024-07-12 10:52:26.795094] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.514 [2024-07-12 10:52:26.795101] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.514 [2024-07-12 10:52:26.795106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.514 [2024-07-12 10:52:26.795240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.514 [2024-07-12 10:52:26.795384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.514 [2024-07-12 10:52:26.795386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.514 10:52:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.514 10:52:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:10.514 10:52:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.514 10:52:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:10.514 10:52:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:10.514 10:52:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.514 10:52:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:10.775 [2024-07-12 10:52:27.601439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.775 10:52:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:11.036 10:52:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:11.036 10:52:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:11.296 10:52:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:11.296 10:52:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:11.296 10:52:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:11.556 10:52:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4031ab9c-aef2-47a3-ad9b-3d4300a6c651 00:14:11.556 10:52:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4031ab9c-aef2-47a3-ad9b-3d4300a6c651 lvol 20 00:14:11.815 10:52:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=39913e71-4052-484e-a4ba-9eb80eb3e05a 00:14:11.815 10:52:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:12.075 10:52:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 39913e71-4052-484e-a4ba-9eb80eb3e05a 00:14:12.075 10:52:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:12.335 [2024-07-12 10:52:29.096446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.335 10:52:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:12.335 10:52:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2028908 00:14:12.335 10:52:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:12.335 10:52:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:12.596 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.541 10:52:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 39913e71-4052-484e-a4ba-9eb80eb3e05a MY_SNAPSHOT 00:14:13.541 10:52:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b8e95f5d-ecc1-48e9-b53a-0f5d9fb35963 00:14:13.541 10:52:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 39913e71-4052-484e-a4ba-9eb80eb3e05a 30 00:14:13.802 10:52:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b8e95f5d-ecc1-48e9-b53a-0f5d9fb35963 MY_CLONE 00:14:14.063 10:52:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=95e1fa89-d1f9-48e8-9e5d-774173b5dd4a 00:14:14.063 10:52:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 95e1fa89-d1f9-48e8-9e5d-774173b5dd4a 00:14:14.323 10:52:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2028908 00:14:24.321 Initializing NVMe Controllers 00:14:24.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:24.321 Controller IO queue size 128, less than required. 00:14:24.321 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:24.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:24.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:24.321 Initialization complete. Launching workers. 
00:14:24.321 ========================================================
00:14:24.321 Latency(us)
00:14:24.321 Device Information : IOPS MiB/s Average min max
00:14:24.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16462.70 64.31 7778.25 1638.11 61146.63
00:14:24.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17744.90 69.32 7215.41 1302.42 47749.74
00:14:24.321 ========================================================
00:14:24.321 Total : 34207.60 133.62 7486.28 1302.42 61146.63
00:14:24.321
00:14:24.321 10:52:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:14:24.321 10:52:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 39913e71-4052-484e-a4ba-9eb80eb3e05a
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4031ab9c-aef2-47a3-ad9b-3d4300a6c651
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2028247 ']'
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2028247
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2028247 ']'
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2028247
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2028247
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2028247'
killing process with pid 2028247
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2028247
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2028247
00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:24.321
10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.321 10:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.706 10:52:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:25.706 00:14:25.706 real 0m23.408s 00:14:25.706 user 1m4.039s 00:14:25.706 sys 0m8.009s 00:14:25.706 10:52:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.706 10:52:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:25.706 ************************************ 00:14:25.706 END TEST nvmf_lvol 00:14:25.706 ************************************ 00:14:25.706 10:52:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:25.706 10:52:42 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:25.706 10:52:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:25.706 10:52:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.706 10:52:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.706 ************************************ 00:14:25.706 START TEST nvmf_lvs_grow 00:14:25.706 ************************************ 00:14:25.706 10:52:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:25.967 * Looking for test storage... 
00:14:25.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.967 10:52:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.967 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:25.967 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:25.968 10:52:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:34.153 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:34.153 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:34.153 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:34.153 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:34.153 10:52:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:34.153 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:34.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:14:34.153 00:14:34.153 --- 10.0.0.2 ping statistics --- 00:14:34.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.153 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:14:34.153 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:34.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
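
For orientation, the nvmf_tcp_init sequence traced here builds a two-port loopback topology on a single host: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2 to act as the NVMe/TCP target, while its link peer (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. A condensed sketch of the same setup, using the interface names from this run (other NICs will enumerate under different names):

    # target side: isolate one port in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: the peer port stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
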
00:14:34.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:14:34.154 00:14:34.154 --- 10.0.0.1 ping statistics --- 00:14:34.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.154 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2035252 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2035252 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2035252 ']' 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.154 [2024-07-12 10:52:50.141340] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:34.154 [2024-07-12 10:52:50.141409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.154 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.154 [2024-07-12 10:52:50.229732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.154 [2024-07-12 10:52:50.323797] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.154 [2024-07-12 10:52:50.323855] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
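
With the data path verified, nvmfappstart launches the target wrapped in the namespace prefix assembled above, so the TCP listener it opens binds next to cvl_0_0. A hedged sketch of the equivalent manual launch (run from the spdk checkout; the until-loop stands in for the harness helper waitforlisten, which polls the RPC socket until the app answers):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # do not issue RPCs until the app has bound /var/tmp/spdk.sock
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
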
00:14:34.154 [2024-07-12 10:52:50.323863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.154 [2024-07-12 10:52:50.323871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.154 [2024-07-12 10:52:50.323877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.154 [2024-07-12 10:52:50.323906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.154 10:52:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:34.415 [2024-07-12 10:52:51.148301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.415 ************************************ 00:14:34.415 START TEST lvs_grow_clean 00:14:34.415 ************************************ 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:34.415 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:34.676 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:34.676 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:34.676 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f9909070-439a-42d5-a21b-0b9464c39793 00:14:34.676 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:34.676 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:34.937 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:34.937 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:34.937 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9909070-439a-42d5-a21b-0b9464c39793 lvol 150 00:14:35.198 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=79455d77-5c02-496c-aa7d-0ececa4750f1 00:14:35.198 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:35.198 10:52:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:35.198 [2024-07-12 10:52:52.110519] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:35.198 [2024-07-12 10:52:52.110587] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:35.198 true 00:14:35.198 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:35.198 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:35.460 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:35.460 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:35.721 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 79455d77-5c02-496c-aa7d-0ececa4750f1 00:14:35.721 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:35.983 [2024-07-12 10:52:52.752571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2035751 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2035751 /var/tmp/bdevperf.sock 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2035751 ']' 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.983 10:52:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:36.249 [2024-07-12 10:52:52.988914] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
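
The bdevperf process started above is the test's initiator; -z keeps it suspended until told to run, so it can first be handed an NVMe-oF controller over its own RPC socket. The three-step flow, condensed from this trace:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # attach the exported namespace as bdev Nvme0n1 over TCP
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # release the suspended job and collect the per-second results shown below
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
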
00:14:36.249 [2024-07-12 10:52:52.988983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2035751 ] 00:14:36.249 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.249 [2024-07-12 10:52:53.069483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.249 [2024-07-12 10:52:53.164089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.820 10:52:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.820 10:52:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:36.820 10:52:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:37.391 Nvme0n1 00:14:37.391 10:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:37.391 [ 00:14:37.391 { 00:14:37.391 "name": "Nvme0n1", 00:14:37.391 "aliases": [ 00:14:37.391 "79455d77-5c02-496c-aa7d-0ececa4750f1" 00:14:37.391 ], 00:14:37.391 "product_name": "NVMe disk", 00:14:37.391 "block_size": 4096, 00:14:37.391 "num_blocks": 38912, 00:14:37.391 "uuid": "79455d77-5c02-496c-aa7d-0ececa4750f1", 00:14:37.391 "assigned_rate_limits": { 00:14:37.391 "rw_ios_per_sec": 0, 00:14:37.391 "rw_mbytes_per_sec": 0, 00:14:37.391 "r_mbytes_per_sec": 0, 00:14:37.391 "w_mbytes_per_sec": 0 00:14:37.391 }, 00:14:37.391 "claimed": false, 00:14:37.391 "zoned": false, 00:14:37.391 "supported_io_types": { 00:14:37.391 "read": true, 00:14:37.391 "write": true, 00:14:37.391 "unmap": true, 00:14:37.391 "flush": true, 00:14:37.391 "reset": true, 00:14:37.391 "nvme_admin": true, 00:14:37.391 "nvme_io": true, 00:14:37.391 "nvme_io_md": false, 00:14:37.391 "write_zeroes": true, 00:14:37.391 "zcopy": false, 00:14:37.391 "get_zone_info": false, 00:14:37.391 "zone_management": false, 00:14:37.391 "zone_append": false, 00:14:37.391 "compare": true, 00:14:37.391 "compare_and_write": true, 00:14:37.391 "abort": true, 00:14:37.391 "seek_hole": false, 00:14:37.391 "seek_data": false, 00:14:37.391 "copy": true, 00:14:37.391 "nvme_iov_md": false 00:14:37.391 }, 00:14:37.391 "memory_domains": [ 00:14:37.391 { 00:14:37.391 "dma_device_id": "system", 00:14:37.391 "dma_device_type": 1 00:14:37.391 } 00:14:37.391 ], 00:14:37.391 "driver_specific": { 00:14:37.391 "nvme": [ 00:14:37.391 { 00:14:37.391 "trid": { 00:14:37.391 "trtype": "TCP", 00:14:37.391 "adrfam": "IPv4", 00:14:37.391 "traddr": "10.0.0.2", 00:14:37.391 "trsvcid": "4420", 00:14:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:37.391 }, 00:14:37.391 "ctrlr_data": { 00:14:37.391 "cntlid": 1, 00:14:37.391 "vendor_id": "0x8086", 00:14:37.391 "model_number": "SPDK bdev Controller", 00:14:37.391 "serial_number": "SPDK0", 00:14:37.391 "firmware_revision": "24.09", 00:14:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:37.391 "oacs": { 00:14:37.391 "security": 0, 00:14:37.391 "format": 0, 00:14:37.391 "firmware": 0, 00:14:37.391 "ns_manage": 0 00:14:37.391 }, 00:14:37.391 "multi_ctrlr": true, 00:14:37.391 "ana_reporting": false 00:14:37.391 }, 
00:14:37.391 "vs": { 00:14:37.391 "nvme_version": "1.3" 00:14:37.391 }, 00:14:37.391 "ns_data": { 00:14:37.391 "id": 1, 00:14:37.391 "can_share": true 00:14:37.391 } 00:14:37.391 } 00:14:37.391 ], 00:14:37.391 "mp_policy": "active_passive" 00:14:37.391 } 00:14:37.391 } 00:14:37.391 ] 00:14:37.391 10:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2035981 00:14:37.391 10:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:37.391 10:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:37.653 Running I/O for 10 seconds... 00:14:38.596 Latency(us) 00:14:38.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.596 Nvme0n1 : 1.00 23240.00 90.78 0.00 0.00 0.00 0.00 0.00 00:14:38.596 =================================================================================================================== 00:14:38.596 Total : 23240.00 90.78 0.00 0.00 0.00 0.00 0.00 00:14:38.596 00:14:39.538 10:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:39.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.538 Nvme0n1 : 2.00 24484.00 95.64 0.00 0.00 0.00 0.00 0.00 00:14:39.538 =================================================================================================================== 00:14:39.538 Total : 24484.00 95.64 0.00 0.00 0.00 0.00 0.00 00:14:39.538 00:14:39.538 true 00:14:39.799 10:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:39.799 10:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:39.799 10:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:39.799 10:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:39.799 10:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2035981 00:14:40.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.740 Nvme0n1 : 3.00 24910.33 97.31 0.00 0.00 0.00 0.00 0.00 00:14:40.740 =================================================================================================================== 00:14:40.740 Total : 24910.33 97.31 0.00 0.00 0.00 0.00 0.00 00:14:40.740 00:14:41.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.683 Nvme0n1 : 4.00 25138.00 98.20 0.00 0.00 0.00 0.00 0.00 00:14:41.683 =================================================================================================================== 00:14:41.683 Total : 25138.00 98.20 0.00 0.00 0.00 0.00 0.00 00:14:41.683 00:14:42.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.625 Nvme0n1 : 5.00 25285.00 98.77 0.00 0.00 0.00 0.00 0.00 00:14:42.625 =================================================================================================================== 00:14:42.625 
Total : 25285.00 98.77 0.00 0.00 0.00 0.00 0.00 00:14:42.625 00:14:43.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.568 Nvme0n1 : 6.00 25358.83 99.06 0.00 0.00 0.00 0.00 0.00 00:14:43.568 =================================================================================================================== 00:14:43.568 Total : 25358.83 99.06 0.00 0.00 0.00 0.00 0.00 00:14:43.568 00:14:44.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.512 Nvme0n1 : 7.00 25427.29 99.33 0.00 0.00 0.00 0.00 0.00 00:14:44.512 =================================================================================================================== 00:14:44.512 Total : 25427.29 99.33 0.00 0.00 0.00 0.00 0.00 00:14:44.512 00:14:45.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.898 Nvme0n1 : 8.00 25491.12 99.57 0.00 0.00 0.00 0.00 0.00 00:14:45.898 =================================================================================================================== 00:14:45.898 Total : 25491.12 99.57 0.00 0.00 0.00 0.00 0.00 00:14:45.898 00:14:46.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.849 Nvme0n1 : 9.00 25536.67 99.75 0.00 0.00 0.00 0.00 0.00 00:14:46.849 =================================================================================================================== 00:14:46.849 Total : 25536.67 99.75 0.00 0.00 0.00 0.00 0.00 00:14:46.849 00:14:47.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.790 Nvme0n1 : 10.00 25575.00 99.90 0.00 0.00 0.00 0.00 0.00 00:14:47.790 =================================================================================================================== 00:14:47.790 Total : 25575.00 99.90 0.00 0.00 0.00 0.00 0.00 00:14:47.790 00:14:47.790 00:14:47.790 Latency(us) 00:14:47.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.790 Nvme0n1 : 10.00 25575.74 99.91 0.00 0.00 5001.15 2512.21 17585.49 00:14:47.790 =================================================================================================================== 00:14:47.790 Total : 25575.74 99.91 0.00 0.00 5001.15 2512.21 17585.49 00:14:47.790 0 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2035751 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2035751 ']' 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2035751 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2035751 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2035751' 00:14:47.790 killing process with pid 2035751 00:14:47.790 10:53:04 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2035751 00:14:47.790 Received shutdown signal, test time was about 10.000000 seconds 00:14:47.790 00:14:47.790 Latency(us) 00:14:47.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.790 =================================================================================================================== 00:14:47.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2035751 00:14:47.790 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.050 10:53:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:48.050 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:48.050 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:48.310 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:48.310 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:48.310 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:48.571 [2024-07-12 10:53:05.307607] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:48.571 request: 00:14:48.571 { 00:14:48.571 "uuid": "f9909070-439a-42d5-a21b-0b9464c39793", 00:14:48.571 "method": "bdev_lvol_get_lvstores", 00:14:48.571 "req_id": 1 00:14:48.571 } 00:14:48.571 Got JSON-RPC error response 00:14:48.571 response: 00:14:48.571 { 00:14:48.571 "code": -19, 00:14:48.571 "message": "No such device" 00:14:48.571 } 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:48.571 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:48.831 aio_bdev 00:14:48.831 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 79455d77-5c02-496c-aa7d-0ececa4750f1 00:14:48.831 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=79455d77-5c02-496c-aa7d-0ececa4750f1 00:14:48.831 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.831 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:48.831 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.831 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.831 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:49.091 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 79455d77-5c02-496c-aa7d-0ececa4750f1 -t 2000 00:14:49.091 [ 00:14:49.091 { 00:14:49.091 "name": "79455d77-5c02-496c-aa7d-0ececa4750f1", 00:14:49.091 "aliases": [ 00:14:49.091 "lvs/lvol" 00:14:49.091 ], 00:14:49.091 "product_name": "Logical Volume", 00:14:49.091 "block_size": 4096, 00:14:49.091 "num_blocks": 38912, 00:14:49.091 "uuid": "79455d77-5c02-496c-aa7d-0ececa4750f1", 00:14:49.091 "assigned_rate_limits": { 00:14:49.091 "rw_ios_per_sec": 0, 00:14:49.091 "rw_mbytes_per_sec": 0, 00:14:49.091 "r_mbytes_per_sec": 0, 00:14:49.091 "w_mbytes_per_sec": 0 00:14:49.091 }, 00:14:49.091 "claimed": false, 00:14:49.091 "zoned": false, 00:14:49.091 "supported_io_types": { 00:14:49.091 "read": true, 00:14:49.091 "write": true, 00:14:49.091 "unmap": true, 00:14:49.091 "flush": false, 00:14:49.091 "reset": true, 00:14:49.091 "nvme_admin": false, 00:14:49.092 "nvme_io": false, 00:14:49.092 
"nvme_io_md": false, 00:14:49.092 "write_zeroes": true, 00:14:49.092 "zcopy": false, 00:14:49.092 "get_zone_info": false, 00:14:49.092 "zone_management": false, 00:14:49.092 "zone_append": false, 00:14:49.092 "compare": false, 00:14:49.092 "compare_and_write": false, 00:14:49.092 "abort": false, 00:14:49.092 "seek_hole": true, 00:14:49.092 "seek_data": true, 00:14:49.092 "copy": false, 00:14:49.092 "nvme_iov_md": false 00:14:49.092 }, 00:14:49.092 "driver_specific": { 00:14:49.092 "lvol": { 00:14:49.092 "lvol_store_uuid": "f9909070-439a-42d5-a21b-0b9464c39793", 00:14:49.092 "base_bdev": "aio_bdev", 00:14:49.092 "thin_provision": false, 00:14:49.092 "num_allocated_clusters": 38, 00:14:49.092 "snapshot": false, 00:14:49.092 "clone": false, 00:14:49.092 "esnap_clone": false 00:14:49.092 } 00:14:49.092 } 00:14:49.092 } 00:14:49.092 ] 00:14:49.092 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:49.092 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:49.092 10:53:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:49.352 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:49.352 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:49.352 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:49.352 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:49.352 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 79455d77-5c02-496c-aa7d-0ececa4750f1 00:14:49.614 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9909070-439a-42d5-a21b-0b9464c39793 00:14:49.875 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:49.875 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:49.875 00:14:49.875 real 0m15.625s 00:14:49.875 user 0m15.301s 00:14:49.875 sys 0m1.332s 00:14:49.875 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.875 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:49.875 ************************************ 00:14:49.875 END TEST lvs_grow_clean 00:14:49.875 ************************************ 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:50.136 ************************************ 00:14:50.136 START TEST lvs_grow_dirty 00:14:50.136 ************************************ 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:50.136 10:53:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:50.397 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:50.397 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:50.397 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8f05674d-66b0-41b2-969b-0a37f6556122 00:14:50.397 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:14:50.397 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:50.658 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:50.658 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:50.658 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f05674d-66b0-41b2-969b-0a37f6556122 lvol 150 00:14:50.658 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=eb617662-96dd-4f39-a2e0-749fe5e26ca7 00:14:50.658 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:50.658 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:50.919 
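
Both variants drive the same grow sequence, which the dirty setup is re-running here: back an lvstore with a 200 MiB file, carve a 150 MiB lvol out of its 49 4-MiB clusters, then grow the file to 400 MiB and propagate the new size upward. Condensed from the trace (run from the spdk checkout; $lvs holds the UUID printed by create_lvstore, 8f05674d-66b0-41b2-969b-0a37f6556122 in this run):

    truncate -s 200M test/nvmf/target/aio_bdev
    ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    ./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150
    truncate -s 400M test/nvmf/target/aio_bdev    # grow the backing file...
    ./scripts/rpc.py bdev_aio_rescan aio_bdev     # ...and let the AIO bdev pick up the new block count
    ./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"    # total_data_clusters: 49 -> 99
    ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
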
[2024-07-12 10:53:07.729703] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:50.919 [2024-07-12 10:53:07.729746] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:50.919 true 00:14:50.919 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:14:50.919 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:50.919 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:50.919 10:53:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:51.180 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eb617662-96dd-4f39-a2e0-749fe5e26ca7 00:14:51.440 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:51.440 [2024-07-12 10:53:08.335451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.440 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:51.701 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:51.701 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2038828 00:14:51.701 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:51.701 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2038828 /var/tmp/bdevperf.sock 00:14:51.701 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2038828 ']' 00:14:51.701 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.701 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.701 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
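
Before bdevperf can attach, the lvol is exported over the fabric: a subsystem is created, the lvol bdev is added as its namespace, and both the subsystem and the discovery service get TCP listeners on the target address. The four RPCs, as issued against the in-namespace target (the transport itself was created earlier with nvmf_create_transport -t tcp -o -u 8192):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eb617662-96dd-4f39-a2e0-749fe5e26ca7
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
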
00:14:51.701 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.701 10:53:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:51.701 [2024-07-12 10:53:08.532073] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:51.701 [2024-07-12 10:53:08.532130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038828 ] 00:14:51.701 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.701 [2024-07-12 10:53:08.604250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.701 [2024-07-12 10:53:08.657871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.643 10:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.643 10:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:52.643 10:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:52.904 Nvme0n1 00:14:52.904 10:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:52.904 [ 00:14:52.904 { 00:14:52.904 "name": "Nvme0n1", 00:14:52.904 "aliases": [ 00:14:52.904 "eb617662-96dd-4f39-a2e0-749fe5e26ca7" 00:14:52.904 ], 00:14:52.904 "product_name": "NVMe disk", 00:14:52.904 "block_size": 4096, 00:14:52.904 "num_blocks": 38912, 00:14:52.904 "uuid": "eb617662-96dd-4f39-a2e0-749fe5e26ca7", 00:14:52.904 "assigned_rate_limits": { 00:14:52.904 "rw_ios_per_sec": 0, 00:14:52.904 "rw_mbytes_per_sec": 0, 00:14:52.904 "r_mbytes_per_sec": 0, 00:14:52.904 "w_mbytes_per_sec": 0 00:14:52.904 }, 00:14:52.904 "claimed": false, 00:14:52.904 "zoned": false, 00:14:52.904 "supported_io_types": { 00:14:52.904 "read": true, 00:14:52.904 "write": true, 00:14:52.904 "unmap": true, 00:14:52.904 "flush": true, 00:14:52.904 "reset": true, 00:14:52.904 "nvme_admin": true, 00:14:52.904 "nvme_io": true, 00:14:52.904 "nvme_io_md": false, 00:14:52.904 "write_zeroes": true, 00:14:52.904 "zcopy": false, 00:14:52.904 "get_zone_info": false, 00:14:52.904 "zone_management": false, 00:14:52.904 "zone_append": false, 00:14:52.904 "compare": true, 00:14:52.904 "compare_and_write": true, 00:14:52.904 "abort": true, 00:14:52.904 "seek_hole": false, 00:14:52.904 "seek_data": false, 00:14:52.904 "copy": true, 00:14:52.904 "nvme_iov_md": false 00:14:52.904 }, 00:14:52.904 "memory_domains": [ 00:14:52.904 { 00:14:52.904 "dma_device_id": "system", 00:14:52.904 "dma_device_type": 1 00:14:52.904 } 00:14:52.904 ], 00:14:52.904 "driver_specific": { 00:14:52.904 "nvme": [ 00:14:52.904 { 00:14:52.904 "trid": { 00:14:52.904 "trtype": "TCP", 00:14:52.904 "adrfam": "IPv4", 00:14:52.904 "traddr": "10.0.0.2", 00:14:52.904 "trsvcid": "4420", 00:14:52.904 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:52.904 }, 00:14:52.904 "ctrlr_data": { 00:14:52.904 "cntlid": 1, 00:14:52.904 "vendor_id": "0x8086", 00:14:52.904 "model_number": "SPDK bdev Controller", 00:14:52.904 "serial_number": "SPDK0", 
00:14:52.904 "firmware_revision": "24.09", 00:14:52.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:52.904 "oacs": { 00:14:52.904 "security": 0, 00:14:52.904 "format": 0, 00:14:52.904 "firmware": 0, 00:14:52.904 "ns_manage": 0 00:14:52.904 }, 00:14:52.904 "multi_ctrlr": true, 00:14:52.904 "ana_reporting": false 00:14:52.904 }, 00:14:52.904 "vs": { 00:14:52.904 "nvme_version": "1.3" 00:14:52.904 }, 00:14:52.904 "ns_data": { 00:14:52.904 "id": 1, 00:14:52.904 "can_share": true 00:14:52.904 } 00:14:52.904 } 00:14:52.904 ], 00:14:52.904 "mp_policy": "active_passive" 00:14:52.904 } 00:14:52.904 } 00:14:52.904 ] 00:14:52.904 10:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2039058 00:14:52.904 10:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:52.904 10:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:53.165 Running I/O for 10 seconds... 00:14:54.107 Latency(us) 00:14:54.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.107 Nvme0n1 : 1.00 24724.00 96.58 0.00 0.00 0.00 0.00 0.00 00:14:54.107 =================================================================================================================== 00:14:54.107 Total : 24724.00 96.58 0.00 0.00 0.00 0.00 0.00 00:14:54.107 00:14:55.049 10:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:14:55.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.049 Nvme0n1 : 2.00 24886.00 97.21 0.00 0.00 0.00 0.00 0.00 00:14:55.049 =================================================================================================================== 00:14:55.049 Total : 24886.00 97.21 0.00 0.00 0.00 0.00 0.00 00:14:55.049 00:14:55.049 true 00:14:55.049 10:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:14:55.049 10:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:55.311 10:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:55.311 10:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:55.311 10:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2039058 00:14:56.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.254 Nvme0n1 : 3.00 24953.33 97.47 0.00 0.00 0.00 0.00 0.00 00:14:56.254 =================================================================================================================== 00:14:56.254 Total : 24953.33 97.47 0.00 0.00 0.00 0.00 0.00 00:14:56.254 00:14:57.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.193 Nvme0n1 : 4.00 25005.00 97.68 0.00 0.00 0.00 0.00 0.00 00:14:57.193 =================================================================================================================== 00:14:57.193 Total : 25005.00 97.68 0.00 
0.00 0.00 0.00 0.00 00:14:57.193 00:14:58.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.210 Nvme0n1 : 5.00 25044.00 97.83 0.00 0.00 0.00 0.00 0.00 00:14:58.210 =================================================================================================================== 00:14:58.210 Total : 25044.00 97.83 0.00 0.00 0.00 0.00 0.00 00:14:58.211 00:14:59.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.155 Nvme0n1 : 6.00 25075.33 97.95 0.00 0.00 0.00 0.00 0.00 00:14:59.155 =================================================================================================================== 00:14:59.155 Total : 25075.33 97.95 0.00 0.00 0.00 0.00 0.00 00:14:59.155 00:15:00.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.097 Nvme0n1 : 7.00 25103.43 98.06 0.00 0.00 0.00 0.00 0.00 00:15:00.097 =================================================================================================================== 00:15:00.097 Total : 25103.43 98.06 0.00 0.00 0.00 0.00 0.00 00:15:00.097 00:15:01.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.040 Nvme0n1 : 8.00 25121.50 98.13 0.00 0.00 0.00 0.00 0.00 00:15:01.040 =================================================================================================================== 00:15:01.040 Total : 25121.50 98.13 0.00 0.00 0.00 0.00 0.00 00:15:01.040 00:15:01.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.984 Nvme0n1 : 9.00 25138.22 98.20 0.00 0.00 0.00 0.00 0.00 00:15:01.984 =================================================================================================================== 00:15:01.984 Total : 25138.22 98.20 0.00 0.00 0.00 0.00 0.00 00:15:01.984 00:15:03.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.371 Nvme0n1 : 10.00 25152.40 98.25 0.00 0.00 0.00 0.00 0.00 00:15:03.371 =================================================================================================================== 00:15:03.371 Total : 25152.40 98.25 0.00 0.00 0.00 0.00 0.00 00:15:03.371 00:15:03.371 00:15:03.371 Latency(us) 00:15:03.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.371 Nvme0n1 : 10.01 25152.46 98.25 0.00 0.00 5085.35 3768.32 9175.04 00:15:03.371 =================================================================================================================== 00:15:03.371 Total : 25152.46 98.25 0.00 0.00 5085.35 3768.32 9175.04 00:15:03.371 0 00:15:03.371 10:53:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2038828 00:15:03.371 10:53:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2038828 ']' 00:15:03.371 10:53:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2038828 00:15:03.371 10:53:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:03.371 10:53:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.371 10:53:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2038828 00:15:03.371 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:03.371 10:53:20 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:03.371 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2038828' 00:15:03.371 killing process with pid 2038828 00:15:03.371 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2038828 00:15:03.371 Received shutdown signal, test time was about 10.000000 seconds 00:15:03.371 00:15:03.371 Latency(us) 00:15:03.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.371 =================================================================================================================== 00:15:03.371 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:03.371 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2038828 00:15:03.371 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:03.371 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:03.632 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:15:03.632 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2035252 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2035252 00:15:03.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2035252 Killed "${NVMF_APP[@]}" "$@" 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2041218 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2041218 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2041218 ']' 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.894 10:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:03.894 [2024-07-12 10:53:20.754026] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:03.894 [2024-07-12 10:53:20.754083] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.894 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.894 [2024-07-12 10:53:20.836409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.155 [2024-07-12 10:53:20.893693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.155 [2024-07-12 10:53:20.893726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.155 [2024-07-12 10:53:20.893731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.155 [2024-07-12 10:53:20.893736] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.155 [2024-07-12 10:53:20.893739] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
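The restart above brings nvmf_tgt back up inside the cvl_0_0_ns_spdk network namespace on a single core (-m 0x1) and then blocks until its RPC socket answers. A minimal sketch of the same sequence, assuming the SPDK repo layout from this log (the polling loop stands in for the waitforlisten helper, which retries up to its max_retries=100 budget):

  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the target responds;
  # waitforlisten in autotest_common.sh does roughly this.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done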
00:15:04.155 [2024-07-12 10:53:20.893755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.724 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.724 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:04.724 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.724 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:04.724 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:04.724 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.724 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:04.724 [2024-07-12 10:53:21.690077] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:04.724 [2024-07-12 10:53:21.690156] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:04.724 [2024-07-12 10:53:21.690179] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:04.984 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:04.984 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev eb617662-96dd-4f39-a2e0-749fe5e26ca7 00:15:04.984 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=eb617662-96dd-4f39-a2e0-749fe5e26ca7 00:15:04.984 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:04.984 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:04.984 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:04.984 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:04.984 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:04.984 10:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eb617662-96dd-4f39-a2e0-749fe5e26ca7 -t 2000 00:15:05.244 [ 00:15:05.244 { 00:15:05.244 "name": "eb617662-96dd-4f39-a2e0-749fe5e26ca7", 00:15:05.244 "aliases": [ 00:15:05.244 "lvs/lvol" 00:15:05.244 ], 00:15:05.244 "product_name": "Logical Volume", 00:15:05.244 "block_size": 4096, 00:15:05.244 "num_blocks": 38912, 00:15:05.244 "uuid": "eb617662-96dd-4f39-a2e0-749fe5e26ca7", 00:15:05.244 "assigned_rate_limits": { 00:15:05.244 "rw_ios_per_sec": 0, 00:15:05.244 "rw_mbytes_per_sec": 0, 00:15:05.244 "r_mbytes_per_sec": 0, 00:15:05.244 "w_mbytes_per_sec": 0 00:15:05.244 }, 00:15:05.244 "claimed": false, 00:15:05.244 "zoned": false, 00:15:05.244 "supported_io_types": { 00:15:05.244 "read": true, 00:15:05.244 "write": true, 00:15:05.244 "unmap": true, 00:15:05.244 "flush": false, 00:15:05.244 "reset": true, 00:15:05.244 "nvme_admin": false, 00:15:05.244 "nvme_io": false, 00:15:05.244 "nvme_io_md": 
false, 00:15:05.244 "write_zeroes": true, 00:15:05.244 "zcopy": false, 00:15:05.244 "get_zone_info": false, 00:15:05.244 "zone_management": false, 00:15:05.244 "zone_append": false, 00:15:05.244 "compare": false, 00:15:05.244 "compare_and_write": false, 00:15:05.244 "abort": false, 00:15:05.244 "seek_hole": true, 00:15:05.244 "seek_data": true, 00:15:05.244 "copy": false, 00:15:05.244 "nvme_iov_md": false 00:15:05.244 }, 00:15:05.244 "driver_specific": { 00:15:05.244 "lvol": { 00:15:05.244 "lvol_store_uuid": "8f05674d-66b0-41b2-969b-0a37f6556122", 00:15:05.244 "base_bdev": "aio_bdev", 00:15:05.244 "thin_provision": false, 00:15:05.244 "num_allocated_clusters": 38, 00:15:05.244 "snapshot": false, 00:15:05.244 "clone": false, 00:15:05.244 "esnap_clone": false 00:15:05.244 } 00:15:05.244 } 00:15:05.244 } 00:15:05.244 ] 00:15:05.244 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:05.244 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:15:05.244 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:05.244 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:05.244 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:15:05.244 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:05.505 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:05.505 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:05.505 [2024-07-12 10:53:22.482522] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:15:05.765 request: 00:15:05.765 { 00:15:05.765 "uuid": "8f05674d-66b0-41b2-969b-0a37f6556122", 00:15:05.765 "method": "bdev_lvol_get_lvstores", 00:15:05.765 "req_id": 1 00:15:05.765 } 00:15:05.765 Got JSON-RPC error response 00:15:05.765 response: 00:15:05.765 { 00:15:05.765 "code": -19, 00:15:05.765 "message": "No such device" 00:15:05.765 } 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:05.765 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:06.025 aio_bdev 00:15:06.025 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev eb617662-96dd-4f39-a2e0-749fe5e26ca7 00:15:06.025 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=eb617662-96dd-4f39-a2e0-749fe5e26ca7 00:15:06.025 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:06.025 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:06.025 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:06.025 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:06.025 10:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:06.025 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eb617662-96dd-4f39-a2e0-749fe5e26ca7 -t 2000 00:15:06.285 [ 00:15:06.285 { 00:15:06.285 "name": "eb617662-96dd-4f39-a2e0-749fe5e26ca7", 00:15:06.285 "aliases": [ 00:15:06.285 "lvs/lvol" 00:15:06.285 ], 00:15:06.285 "product_name": "Logical Volume", 00:15:06.285 "block_size": 4096, 00:15:06.285 "num_blocks": 38912, 00:15:06.285 "uuid": "eb617662-96dd-4f39-a2e0-749fe5e26ca7", 00:15:06.285 "assigned_rate_limits": { 00:15:06.285 "rw_ios_per_sec": 0, 00:15:06.285 "rw_mbytes_per_sec": 0, 00:15:06.285 "r_mbytes_per_sec": 0, 00:15:06.285 "w_mbytes_per_sec": 0 00:15:06.285 }, 00:15:06.285 "claimed": false, 00:15:06.285 "zoned": false, 00:15:06.285 "supported_io_types": { 
00:15:06.285 "read": true, 00:15:06.285 "write": true, 00:15:06.285 "unmap": true, 00:15:06.285 "flush": false, 00:15:06.285 "reset": true, 00:15:06.285 "nvme_admin": false, 00:15:06.285 "nvme_io": false, 00:15:06.285 "nvme_io_md": false, 00:15:06.285 "write_zeroes": true, 00:15:06.285 "zcopy": false, 00:15:06.285 "get_zone_info": false, 00:15:06.285 "zone_management": false, 00:15:06.285 "zone_append": false, 00:15:06.285 "compare": false, 00:15:06.285 "compare_and_write": false, 00:15:06.285 "abort": false, 00:15:06.285 "seek_hole": true, 00:15:06.285 "seek_data": true, 00:15:06.285 "copy": false, 00:15:06.285 "nvme_iov_md": false 00:15:06.285 }, 00:15:06.285 "driver_specific": { 00:15:06.285 "lvol": { 00:15:06.285 "lvol_store_uuid": "8f05674d-66b0-41b2-969b-0a37f6556122", 00:15:06.285 "base_bdev": "aio_bdev", 00:15:06.285 "thin_provision": false, 00:15:06.285 "num_allocated_clusters": 38, 00:15:06.285 "snapshot": false, 00:15:06.285 "clone": false, 00:15:06.285 "esnap_clone": false 00:15:06.285 } 00:15:06.285 } 00:15:06.285 } 00:15:06.285 ] 00:15:06.285 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:06.285 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:15:06.285 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:06.545 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:06.545 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:15:06.545 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:06.545 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:06.545 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eb617662-96dd-4f39-a2e0-749fe5e26ca7 00:15:06.805 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f05674d-66b0-41b2-969b-0a37f6556122 00:15:07.067 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:07.067 10:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:07.067 00:15:07.067 real 0m17.079s 00:15:07.067 user 0m44.568s 00:15:07.067 sys 0m3.253s 00:15:07.067 10:53:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:07.067 10:53:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:07.067 ************************************ 00:15:07.067 END TEST lvs_grow_dirty 00:15:07.067 ************************************ 00:15:07.067 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:07.067 10:53:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:07.067 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:07.067 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:07.067 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:07.067 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:07.327 nvmf_trace.0 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.327 rmmod nvme_tcp 00:15:07.327 rmmod nvme_fabrics 00:15:07.327 rmmod nvme_keyring 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2041218 ']' 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2041218 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2041218 ']' 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2041218 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2041218 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2041218' 00:15:07.327 killing process with pid 2041218 00:15:07.327 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2041218 00:15:07.328 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2041218 00:15:07.587 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:07.587 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:07.587 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:07.587 
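The nvmf_trace.0 tarball produced above is the raw tracepoint shared-memory file; it can be inspected with the spdk_trace tool named in the app startup notices. A hedged usage sketch (the -f flag for reading a file copy is an assumption; check spdk_trace --help on your build):

  # Live snapshot, exactly as the startup notice suggests:
  ./build/bin/spdk_trace -s nvmf -i 0
  # Offline, from the archived copy:
  tar -xzf nvmf_trace.0_shm.tar.gz -C /tmp
  ./build/bin/spdk_trace -f /tmp/nvmf_trace.0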
10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.587 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.587 10:53:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.587 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.587 10:53:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.497 10:53:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:09.497 00:15:09.497 real 0m43.826s 00:15:09.497 user 1m5.924s 00:15:09.497 sys 0m10.531s 00:15:09.497 10:53:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.497 10:53:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:09.497 ************************************ 00:15:09.497 END TEST nvmf_lvs_grow 00:15:09.497 ************************************ 00:15:09.497 10:53:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:09.497 10:53:26 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:09.497 10:53:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:09.497 10:53:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.497 10:53:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:09.756 ************************************ 00:15:09.756 START TEST nvmf_bdev_io_wait 00:15:09.756 ************************************ 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:09.756 * Looking for test storage... 
00:15:09.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.756 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.757 10:53:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:17.897 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:17.897 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:17.897 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:17.897 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:17.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:15:17.897 00:15:17.897 --- 10.0.0.2 ping statistics --- 00:15:17.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.897 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:15:17.897 00:15:17.897 --- 10.0.0.1 ping statistics --- 00:15:17.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.897 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2046137 00:15:17.897 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2046137 00:15:17.898 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:17.898 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2046137 ']' 00:15:17.898 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.898 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.898 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.898 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.898 10:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.898 [2024-07-12 10:53:34.008703] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:17.898 [2024-07-12 10:53:34.008765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.898 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.898 [2024-07-12 10:53:34.099244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.898 [2024-07-12 10:53:34.196773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.898 [2024-07-12 10:53:34.196835] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.898 [2024-07-12 10:53:34.196843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.898 [2024-07-12 10:53:34.196849] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.898 [2024-07-12 10:53:34.196856] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.898 [2024-07-12 10:53:34.197022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.898 [2024-07-12 10:53:34.197179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.898 [2024-07-12 10:53:34.197403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.898 [2024-07-12 10:53:34.197404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.898 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:18.160 [2024-07-12 10:53:34.928569] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
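Because the target was started with --wait-for-rpc, it pauses before subsystem initialization, which is what lets the test shrink the bdev_io pool before anything can allocate from it: bdev_set_options -p 5 -c 1 caps the pool at five bdev_io structures with a per-channel cache of one, so the 128-deep bdevperf queues launched below are guaranteed to exhaust it and exercise the io_wait path. The required ordering, as a sketch:

  # Pre-init options must land before framework_start_init, hence --wait-for-rpc.
  ./scripts/rpc.py bdev_set_options -p 5 -c 1   # -p pool size, -c per-channel cache
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192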
00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:18.160 Malloc0 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.160 10:53:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:18.160 [2024-07-12 10:53:35.001604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2046377 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2046380 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:18.160 { 00:15:18.160 "params": { 00:15:18.160 "name": "Nvme$subsystem", 00:15:18.160 "trtype": "$TEST_TRANSPORT", 00:15:18.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.160 "adrfam": "ipv4", 00:15:18.160 "trsvcid": "$NVMF_PORT", 00:15:18.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.160 "hdgst": ${hdgst:-false}, 00:15:18.160 "ddgst": ${ddgst:-false} 00:15:18.160 }, 00:15:18.160 "method": "bdev_nvme_attach_controller" 00:15:18.160 } 00:15:18.160 EOF 00:15:18.160 )") 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2046382 00:15:18.160 10:53:35 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:18.160 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:18.160 { 00:15:18.160 "params": { 00:15:18.160 "name": "Nvme$subsystem", 00:15:18.160 "trtype": "$TEST_TRANSPORT", 00:15:18.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.160 "adrfam": "ipv4", 00:15:18.160 "trsvcid": "$NVMF_PORT", 00:15:18.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.160 "hdgst": ${hdgst:-false}, 00:15:18.160 "ddgst": ${ddgst:-false} 00:15:18.160 }, 00:15:18.160 "method": "bdev_nvme_attach_controller" 00:15:18.160 } 00:15:18.161 EOF 00:15:18.161 )") 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2046386 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:18.161 { 00:15:18.161 "params": { 00:15:18.161 "name": "Nvme$subsystem", 00:15:18.161 "trtype": "$TEST_TRANSPORT", 00:15:18.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.161 "adrfam": "ipv4", 00:15:18.161 "trsvcid": "$NVMF_PORT", 00:15:18.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.161 "hdgst": ${hdgst:-false}, 00:15:18.161 "ddgst": ${ddgst:-false} 00:15:18.161 }, 00:15:18.161 "method": "bdev_nvme_attach_controller" 00:15:18.161 } 00:15:18.161 EOF 00:15:18.161 )") 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:18.161 { 00:15:18.161 "params": { 00:15:18.161 "name": "Nvme$subsystem", 00:15:18.161 "trtype": "$TEST_TRANSPORT", 00:15:18.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.161 "adrfam": "ipv4", 00:15:18.161 "trsvcid": "$NVMF_PORT", 00:15:18.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.161 "hdgst": ${hdgst:-false}, 00:15:18.161 "ddgst": ${ddgst:-false} 00:15:18.161 }, 00:15:18.161 "method": "bdev_nvme_attach_controller" 00:15:18.161 } 00:15:18.161 EOF 00:15:18.161 )") 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2046377 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:18.161 "params": { 00:15:18.161 "name": "Nvme1", 00:15:18.161 "trtype": "tcp", 00:15:18.161 "traddr": "10.0.0.2", 00:15:18.161 "adrfam": "ipv4", 00:15:18.161 "trsvcid": "4420", 00:15:18.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.161 "hdgst": false, 00:15:18.161 "ddgst": false 00:15:18.161 }, 00:15:18.161 "method": "bdev_nvme_attach_controller" 00:15:18.161 }' 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
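Each of the four bdevperf instances launched above gets its own core mask, shm id (-i), and workload, but the same generated JSON config: gen_nvmf_target_json (defined in the nvmf/common.sh frames of this trace) assembles a bdev_nvme_attach_controller stanza for Nvme1, jq validates it, and printf emits the resolved JSON seen around these lines before it is handed over via process substitution as --json /dev/fd/63. One launch, spelled out (flags copied from the log):

  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json)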
00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:18.161 "params": { 00:15:18.161 "name": "Nvme1", 00:15:18.161 "trtype": "tcp", 00:15:18.161 "traddr": "10.0.0.2", 00:15:18.161 "adrfam": "ipv4", 00:15:18.161 "trsvcid": "4420", 00:15:18.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.161 "hdgst": false, 00:15:18.161 "ddgst": false 00:15:18.161 }, 00:15:18.161 "method": "bdev_nvme_attach_controller" 00:15:18.161 }' 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:18.161 "params": { 00:15:18.161 "name": "Nvme1", 00:15:18.161 "trtype": "tcp", 00:15:18.161 "traddr": "10.0.0.2", 00:15:18.161 "adrfam": "ipv4", 00:15:18.161 "trsvcid": "4420", 00:15:18.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.161 "hdgst": false, 00:15:18.161 "ddgst": false 00:15:18.161 }, 00:15:18.161 "method": "bdev_nvme_attach_controller" 00:15:18.161 }' 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:18.161 10:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:18.161 "params": { 00:15:18.161 "name": "Nvme1", 00:15:18.161 "trtype": "tcp", 00:15:18.161 "traddr": "10.0.0.2", 00:15:18.161 "adrfam": "ipv4", 00:15:18.161 "trsvcid": "4420", 00:15:18.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.161 "hdgst": false, 00:15:18.161 "ddgst": false 00:15:18.161 }, 00:15:18.161 "method": "bdev_nvme_attach_controller" 00:15:18.161 }' 00:15:18.161 [2024-07-12 10:53:35.058016] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:18.161 [2024-07-12 10:53:35.058087] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:18.161 [2024-07-12 10:53:35.060000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:18.161 [2024-07-12 10:53:35.060067] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:18.161 [2024-07-12 10:53:35.063674] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:18.161 [2024-07-12 10:53:35.063744] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:18.161 [2024-07-12 10:53:35.066940] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
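Note on the interleaved traces above: all four bdevperf launches share one gen_nvmf_target_json pattern. Each subsystem gets a params block built from a bash here-doc, the blocks are joined with a comma IFS, and the result is pretty-printed through jq before being handed to bdevperf as --json /dev/fd/63 (a process substitution). A minimal standalone sketch of that pattern; the params block and the join are taken from the trace, while the outer "subsystems"/"bdev" wrapper is an assumption, since the trace only prints the joined params blocks:

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,   # first char of IFS joins ${config[*]} with commas
  # Assumed wrapper: the trace only shows the joined blocks piped through jq.
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

The four concurrent jobs (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80) all attach to the same cnode1 target; the wait 2046377/2046380/2046382/2046386 lines further down reap them.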
00:15:18.161 [2024-07-12 10:53:35.067039] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:18.161 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.423 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.423 [2024-07-12 10:53:35.265764] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.423 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.423 [2024-07-12 10:53:35.337277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:18.423 [2024-07-12 10:53:35.360211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.423 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.683 [2024-07-12 10:53:35.430816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.683 [2024-07-12 10:53:35.432965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:18.683 [2024-07-12 10:53:35.496911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:18.683 [2024-07-12 10:53:35.504454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.683 [2024-07-12 10:53:35.570160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:18.945 Running I/O for 1 seconds... 00:15:18.945 Running I/O for 1 seconds... 00:15:18.945 Running I/O for 1 seconds... 00:15:18.945 Running I/O for 1 seconds... 00:15:19.890 00:15:19.890 Latency(us) 00:15:19.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.890 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:19.890 Nvme1n1 : 1.01 12486.66 48.78 0.00 0.00 10219.75 5324.80 15073.28 00:15:19.890 =================================================================================================================== 00:15:19.890 Total : 12486.66 48.78 0.00 0.00 10219.75 5324.80 15073.28 00:15:19.890 00:15:19.890 Latency(us) 00:15:19.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.890 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:19.890 Nvme1n1 : 1.01 9683.59 37.83 0.00 0.00 13158.47 8246.61 22719.15 00:15:19.890 =================================================================================================================== 00:15:19.890 Total : 9683.59 37.83 0.00 0.00 13158.47 8246.61 22719.15 00:15:19.890 00:15:19.890 Latency(us) 00:15:19.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.890 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:19.890 Nvme1n1 : 1.00 187729.86 733.32 0.00 0.00 678.87 274.77 826.03 00:15:19.890 =================================================================================================================== 00:15:19.890 Total : 187729.86 733.32 0.00 0.00 678.87 274.77 826.03 00:15:19.890 00:15:19.890 Latency(us) 00:15:19.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.890 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:19.890 Nvme1n1 : 1.01 9629.01 37.61 0.00 0.00 13250.96 5461.33 28398.93 00:15:19.890 =================================================================================================================== 00:15:19.890 Total : 9629.01 37.61 0.00 0.00 13250.96 5461.33 28398.93 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2046380 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2046382 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2046386 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:20.151 rmmod nvme_tcp 00:15:20.151 rmmod nvme_fabrics 00:15:20.151 rmmod nvme_keyring 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2046137 ']' 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2046137 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2046137 ']' 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2046137 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.151 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2046137 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2046137' 00:15:20.412 killing process with pid 2046137 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2046137 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2046137 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.412 10:53:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.962 10:53:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:22.962 00:15:22.962 real 0m12.868s 00:15:22.962 user 0m20.014s 00:15:22.962 sys 0m7.239s 00:15:22.962 10:53:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.962 10:53:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:22.962 ************************************ 00:15:22.962 END TEST nvmf_bdev_io_wait 00:15:22.962 ************************************ 00:15:22.962 10:53:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:22.962 10:53:39 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:22.962 10:53:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:22.962 10:53:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.962 10:53:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:22.962 ************************************ 00:15:22.962 START TEST nvmf_queue_depth 00:15:22.962 ************************************ 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:22.962 * Looking for test storage... 
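The nvmf_bdev_io_wait teardown above follows a fixed order inside nvmftestfini. A rough sketch assembled from the trace, not the verbatim nvmf/common.sh; $nvmfpid stands in for the literal pid 2046137:

sync
# Host side first: retry unloading nvme-tcp; nvme_fabrics and nvme_keyring
# come out as dependencies, hence the extra rmmod lines in the log.
for i in {1..20}; do
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
kill "$nvmfpid" && wait "$nvmfpid"   # stop the nvmf_tgt reactor process
_remove_spdk_ns                      # drops the cvl_0_0_ns_spdk namespace
ip -4 addr flush cvl_0_1             # clear the initiator-side address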
00:15:22.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:22.962 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:22.963 10:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.556 
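gather_supported_nvmf_pci_devs above buckets NIC PCI IDs into the e810/x722/mlx arrays and, because SPDK_TEST_NVMF_NICS=e810, narrows pci_devs to the E810 entries. A sketch of that classification using lspci as the source; the real common.sh walks a prebuilt pci_bus_cache instead, and the Mellanox ID list is abbreviated here:

declare -a e810=() x722=() mlx=() pci_devs=()
while read -r pci vendor device; do
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) e810+=("$pci") ;;   # Intel E810, as traced
    0x8086:0x37d2)               x722+=("$pci") ;;   # Intel X722
    0x15b3:*)                    mlx+=("$pci") ;;    # Mellanox families
  esac
done < <(lspci -Dnmm | awk '{print $1, "0x"$3, "0x"$4}' | tr -d '"')
[[ $SPDK_TEST_NVMF_NICS == e810 ]] && pci_devs=("${e810[@]}")

The two Found 0000:4b:00.0 / 0000:4b:00.1 (0x8086 - 0x159b) lines that follow are this filter matching the pair of E810 ports bound to the ice driver.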
10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:29.556 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:29.556 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:29.556 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:29.556 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:29.556 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:29.818 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:29.818 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:29.818 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:29.818 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:29.818 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:29.818 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:29.818 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:30.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:30.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:15:30.080 00:15:30.080 --- 10.0.0.2 ping statistics --- 00:15:30.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.080 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:15:30.080 00:15:30.080 --- 10.0.0.1 ping statistics --- 00:15:30.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.080 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2050850 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2050850 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2050850 ']' 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.080 10:53:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.080 [2024-07-12 10:53:46.946504] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
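Both pings succeeding closes out nvmf_tcp_init. Collected in one place, the topology the trace assembled (every command appears verbatim above): cvl_0_0 is the target-side port, moved into its own namespace at 10.0.0.2/24; cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

The target itself then runs inside the namespace, which is why NVMF_APP is prefixed with the ip netns exec wrapper before nvmfappstart launches nvmf_tgt -i 0 -e 0xFFFF -m 0x2.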
00:15:30.080 [2024-07-12 10:53:46.946570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.080 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.080 [2024-07-12 10:53:47.021421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.342 [2024-07-12 10:53:47.115040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.342 [2024-07-12 10:53:47.115101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.342 [2024-07-12 10:53:47.115110] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.342 [2024-07-12 10:53:47.115118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.342 [2024-07-12 10:53:47.115135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.342 [2024-07-12 10:53:47.115163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.914 [2024-07-12 10:53:47.794283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.914 Malloc0 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.914 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.915 
10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.915 [2024-07-12 10:53:47.863905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2051194 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2051194 /var/tmp/bdevperf.sock 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2051194 ']' 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.915 10:53:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:31.176 [2024-07-12 10:53:47.921632] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
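With the target listening, provisioning happens entirely over JSON-RPC; rpc_cmd here evidently resolves to scripts/rpc.py against the default /var/tmp/spdk.sock. The sequence above, restated as plain calls (the commands are verbatim from the trace; the io_unit_size reading of -u is my gloss):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, -u 8192 io_unit_size
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then starts in standalone mode (-z) with its own RPC socket, the controller is attached through that socket (bdev_nvme_attach_controller -b NVMe0 ... -n nqn.2016-06.io.spdk:cnode1 on /var/tmp/bdevperf.sock), and bdevperf.py perform_tests drives the 10-second verify run at queue depth 1024.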
00:15:31.176 [2024-07-12 10:53:47.921695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051194 ] 00:15:31.176 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.176 [2024-07-12 10:53:48.003413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.176 [2024-07-12 10:53:48.099005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.749 10:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.749 10:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:31.749 10:53:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:31.749 10:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.749 10:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:32.011 NVMe0n1 00:15:32.011 10:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.011 10:53:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:32.273 Running I/O for 10 seconds... 00:15:42.325 00:15:42.325 Latency(us) 00:15:42.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.325 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:42.325 Verification LBA range: start 0x0 length 0x4000 00:15:42.325 NVMe0n1 : 10.04 12655.14 49.43 0.00 0.00 80650.12 5379.41 81701.55 00:15:42.325 =================================================================================================================== 00:15:42.325 Total : 12655.14 49.43 0.00 0.00 80650.12 5379.41 81701.55 00:15:42.325 0 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2051194 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2051194 ']' 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2051194 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2051194 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2051194' 00:15:42.325 killing process with pid 2051194 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2051194 00:15:42.325 Received shutdown signal, test time was about 10.000000 seconds 00:15:42.325 00:15:42.325 Latency(us) 00:15:42.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.325 
=================================================================================================================== 00:15:42.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2051194 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:42.325 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:42.325 rmmod nvme_tcp 00:15:42.325 rmmod nvme_fabrics 00:15:42.585 rmmod nvme_keyring 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2050850 ']' 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2050850 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2050850 ']' 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2050850 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2050850 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2050850' 00:15:42.585 killing process with pid 2050850 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2050850 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2050850 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.585 10:53:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.133 10:54:01 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.133 00:15:45.133 real 0m22.139s 00:15:45.133 user 0m25.590s 00:15:45.133 sys 0m6.761s 00:15:45.133 10:54:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.133 10:54:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:45.133 ************************************ 00:15:45.133 END TEST nvmf_queue_depth 00:15:45.133 ************************************ 00:15:45.133 10:54:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:45.133 10:54:01 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:45.133 10:54:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:45.133 10:54:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.133 10:54:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.133 ************************************ 00:15:45.133 START TEST nvmf_target_multipath 00:15:45.133 ************************************ 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:45.133 * Looking for test storage... 00:15:45.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.133 10:54:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:53.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:53.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.281 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:53.282 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:53.282 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:53.282 10:54:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:53.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:15:53.282 00:15:53.282 --- 10.0.0.2 ping statistics --- 00:15:53.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.282 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:53.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:53.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:15:53.282 00:15:53.282 --- 10.0.0.1 ping statistics --- 00:15:53.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.282 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:53.282 only one NIC for nvmf test 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:53.282 rmmod nvme_tcp 00:15:53.282 rmmod nvme_fabrics 00:15:53.282 rmmod nvme_keyring 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.282 10:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:54.668 00:15:54.668 real 0m9.699s 00:15:54.668 user 0m2.033s 00:15:54.668 sys 0m5.567s 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:54.668 10:54:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:54.668 ************************************ 00:15:54.668 END TEST nvmf_target_multipath 00:15:54.668 ************************************ 00:15:54.668 10:54:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:54.668 10:54:11 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:54.668 10:54:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:54.668 10:54:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.668 10:54:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.668 ************************************ 00:15:54.668 START TEST nvmf_zcopy 00:15:54.668 ************************************ 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:54.668 * Looking for test storage... 
00:15:54.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
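Annotation: the PATH echoed above grows by one copy of the Go/protoc/golangci directories every time paths/export.sh is sourced, because each pass prepends unconditionally. A guarded prepend keeps the variable bounded; the sketch below is illustrative only (the helper name pathmunge is not part of the SPDK test scripts):

#!/usr/bin/env bash
# Minimal sketch: prepend a directory to PATH only if it is not already
# present, so repeated sourcing cannot balloon the variable. The name
# "pathmunge" is illustrative, not a helper used by these test scripts.
pathmunge() {
    case ":${PATH}:" in
        *":${1}:"*) ;;               # already on PATH: do nothing
        *) PATH="${1}:${PATH}" ;;    # otherwise prepend exactly once
    esac
}
pathmunge /opt/go/1.21.1/bin
pathmunge /opt/go/1.21.1/bin         # second call is a no-op
export PATH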
00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.668 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.669 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.669 10:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.669 10:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.669 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:54.669 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:54.669 10:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:54.669 10:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:02.807 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.807 
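Annotation: the pci_devs loop traced here resolves each detected E810 function (device ID 0x159b) to its kernel network interface by globbing the device's net/ directory in sysfs. The same lookup as a standalone sketch, using the PCI address 0000:4b:00.0 reported in this log:

#!/usr/bin/env bash
# Sketch of the sysfs lookup done by the loop above: a PCI function's
# attached net interfaces appear as subdirectories of
# /sys/bus/pci/devices/<addr>/net/.
shopt -s nullglob                    # empty glob -> empty array, not a literal pattern
pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
(( ${#pci_net_devs[@]} > 0 )) || { echo "no net devices under $pci" >&2; exit 1; }
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"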
10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:02.807 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:02.807 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:02.808 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:02.808 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:02.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:16:02.808 00:16:02.808 --- 10.0.0.2 ping statistics --- 00:16:02.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.808 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:16:02.808 00:16:02.808 --- 10.0.0.1 ping statistics --- 00:16:02.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.808 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2062145 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2062145 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2062145 ']' 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.808 10:54:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:02.808 [2024-07-12 10:54:18.976147] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:02.808 [2024-07-12 10:54:18.976208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.808 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.808 [2024-07-12 10:54:19.063474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.808 [2024-07-12 10:54:19.156003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.808 [2024-07-12 10:54:19.156059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:02.808 [2024-07-12 10:54:19.156067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.808 [2024-07-12 10:54:19.156074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.808 [2024-07-12 10:54:19.156080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.808 [2024-07-12 10:54:19.156111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.808 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.808 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:02.808 10:54:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.808 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:02.808 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.068 10:54:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.068 10:54:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:03.068 10:54:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:03.068 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.068 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.068 [2024-07-12 10:54:19.809287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.068 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.068 10:54:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:03.068 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.068 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.068 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.069 [2024-07-12 10:54:19.833506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.069 malloc0 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.069 
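Annotation: the rpc_cmd sequence traced above builds the target side of the zcopy test: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1, a 32 MiB malloc bdev attached as namespace 1, and a listener on 10.0.0.2:4420. Issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock socket, the equivalent bring-up is roughly the following sketch (it assumes the nvmf_tgt launched earlier in this log is still running; the flags are copied from the traced calls):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy    # zero-copy enabled, as in the traced call
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0           # 32 MiB bdev with 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose malloc0 as NSID 1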
10:54:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:03.069 { 00:16:03.069 "params": { 00:16:03.069 "name": "Nvme$subsystem", 00:16:03.069 "trtype": "$TEST_TRANSPORT", 00:16:03.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:03.069 "adrfam": "ipv4", 00:16:03.069 "trsvcid": "$NVMF_PORT", 00:16:03.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:03.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:03.069 "hdgst": ${hdgst:-false}, 00:16:03.069 "ddgst": ${ddgst:-false} 00:16:03.069 }, 00:16:03.069 "method": "bdev_nvme_attach_controller" 00:16:03.069 } 00:16:03.069 EOF 00:16:03.069 )") 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:03.069 10:54:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:03.069 "params": { 00:16:03.069 "name": "Nvme1", 00:16:03.069 "trtype": "tcp", 00:16:03.069 "traddr": "10.0.0.2", 00:16:03.069 "adrfam": "ipv4", 00:16:03.069 "trsvcid": "4420", 00:16:03.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:03.069 "hdgst": false, 00:16:03.069 "ddgst": false 00:16:03.069 }, 00:16:03.069 "method": "bdev_nvme_attach_controller" 00:16:03.069 }' 00:16:03.069 [2024-07-12 10:54:19.931424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:03.069 [2024-07-12 10:54:19.931484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2062448 ] 00:16:03.069 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.069 [2024-07-12 10:54:20.012836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.330 [2024-07-12 10:54:20.117738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.330 Running I/O for 10 seconds... 
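Annotation: gen_nvmf_target_json, traced above, emits one bdev_nvme_attach_controller fragment per subsystem and feeds the assembled document to bdevperf through a /dev/fd substitution. Written to a file instead, the configuration behind the verify run would look roughly like the sketch below; the outer subsystems/bdev wrapper is not visible in this excerpt, so treat that part as an assumption:

cat > /tmp/bdevperf_nvmf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Standalone equivalent of the traced verify run:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192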
00:16:15.561
00:16:15.561                                                                 Latency(us)
00:16:15.561 Device Information : runtime(s)    IOPS     MiB/s   Fail/s    TO/s    Average      min       max
00:16:15.561 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:15.561 Verification LBA range: start 0x0 length 0x1000
00:16:15.561 Nvme1n1            :      10.05  9195.91    71.84     0.00    0.00   13823.67  2976.43  45438.29
00:16:15.561 ===================================================================================================================
00:16:15.561 Total              :             9195.91    71.84     0.00    0.00   13823.67  2976.43  45438.29
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2064456
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:15.561 {
00:16:15.561 "params": {
00:16:15.561 "name": "Nvme$subsystem",
00:16:15.561 "trtype": "$TEST_TRANSPORT",
00:16:15.561 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:15.561 "adrfam": "ipv4",
00:16:15.561 "trsvcid": "$NVMF_PORT",
00:16:15.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:15.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:15.561 "hdgst": ${hdgst:-false},
00:16:15.561 "ddgst": ${ddgst:-false}
00:16:15.561 },
00:16:15.561 "method": "bdev_nvme_attach_controller"
00:16:15.561 }
00:16:15.561 EOF
00:16:15.561 )")
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:16:15.561 [2024-07-12 10:54:30.489283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 10:54:30.489310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
10:54:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
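Annotation: the MiB/s column in the results table above follows directly from the IOPS column and the 8192-byte IO size: 9195.91 IOPS x 8192 B per IO is about 75.3 MB/s, which divided by 2^20 gives the reported 71.84 MiB/s. A one-line sanity check:

awk 'BEGIN { printf "%.2f MiB/s\n", 9195.91 * 8192 / 1048576 }'   # prints 71.84 MiB/s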
00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:15.561 10:54:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:15.561 "params": { 00:16:15.561 "name": "Nvme1", 00:16:15.561 "trtype": "tcp", 00:16:15.561 "traddr": "10.0.0.2", 00:16:15.561 "adrfam": "ipv4", 00:16:15.561 "trsvcid": "4420", 00:16:15.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.561 "hdgst": false, 00:16:15.561 "ddgst": false 00:16:15.561 }, 00:16:15.561 "method": "bdev_nvme_attach_controller" 00:16:15.561 }' 00:16:15.561 [2024-07-12 10:54:30.501282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.501290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.513309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.513316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.525341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.525348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.528519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:15.561 [2024-07-12 10:54:30.528568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2064456 ] 00:16:15.561 [2024-07-12 10:54:30.537371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.537378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.549401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.549408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.561 [2024-07-12 10:54:30.561433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.561440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.573464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.573471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.585496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.585503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.597528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.597534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.602647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.561 [2024-07-12 10:54:30.609561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.609568] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.621591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.621598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.633623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.633635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.645652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.645661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.656940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.561 [2024-07-12 10:54:30.657682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.657689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.669717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.669726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.681747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.681759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.693775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.693782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.705807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.705814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.717836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.717843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.729875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.729888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.741903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.741911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.753934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.753943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.765967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.765974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.777997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.778003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.790031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.790040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.802062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.802071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.814094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.814100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.561 [2024-07-12 10:54:30.826129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.561 [2024-07-12 10:54:30.826135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 [2024-07-12 10:54:30.838160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.838170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 [2024-07-12 10:54:30.850189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.850198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 [2024-07-12 10:54:30.862220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.862227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 [2024-07-12 10:54:30.874250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.874256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 [2024-07-12 10:54:30.886285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.886292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 [2024-07-12 10:54:30.898311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.898319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 [2024-07-12 10:54:30.910342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.910349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 [2024-07-12 10:54:30.922374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.922381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 [2024-07-12 10:54:30.934406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.934413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 [2024-07-12 10:54:30.946445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.946459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.562 Running I/O for 5 seconds... 
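Annotation: from "Running I/O for 5 seconds..." to the end of this excerpt, each bracketed pair ("Requested NSID 1 already in use" followed by "Unable to add namespace") is one deliberately failing nvmf_subsystem_add_ns call made while the randrw bdevperf job (perfpid above) is still running, so the namespace pause/resume path in nvmf_rpc_ns_paused is exercised under live I/O. A minimal sketch of that pattern, with an illustrative loop bound in place of the test's run-until-perf-exits condition:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for _ in $(seq 1 20); do             # 20 is illustrative; the test keeps going while I/O runs
    # NSID 1 is already occupied by malloc0, so every attempt must fail.
    if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
        echo "unexpected success: NSID 1 was free" >&2
        exit 1
    fi
done
echo "all add_ns attempts rejected, as expected"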
00:16:15.562 [2024-07-12 10:54:30.958471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.562 [2024-07-12 10:54:30.958479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... the same two-line error pair repeats roughly 300 more times, one iteration every ~13 ms, wall clock 10:54:30.974 through 10:54:34.926 (elapsed log time 00:16:15.562 -> 00:16:18.172) ...]
00:16:18.172 [2024-07-12 10:54:34.939829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:34.939844] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:34.952923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:34.952938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:34.965579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:34.965594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:34.978466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:34.978480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:34.991735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:34.991750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.005106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.005121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.018174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.018189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.031194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.031210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.043716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.043731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.056809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.056824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.070580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.070595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.083225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.083240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.096622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.096637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.109884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.109899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.123386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.123400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.136794] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.136808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.172 [2024-07-12 10:54:35.150188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.172 [2024-07-12 10:54:35.150202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.162937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.162951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.175935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.175948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.188260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.188275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.201590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.201604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.214250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.214264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.227386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.227400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.240631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.240645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.253654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.253669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.267018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.267032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.279675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.279690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.293025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.293040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.306350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.306364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.319084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.319099] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.332770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.332785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.345723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.345738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.358877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.358891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.372280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.372294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.385770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.385785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.399247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.434 [2024-07-12 10:54:35.399261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.434 [2024-07-12 10:54:35.412227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.435 [2024-07-12 10:54:35.412241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.425891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.425906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.439147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.439162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.452488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.452503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.465889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.465904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.479381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.479395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.492853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.492868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.505535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.505549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.518649] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.518663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.531925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.531940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.545410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.545425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.558401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.558417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.570905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.570919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.584364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.584378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.596911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.596925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.609903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.609917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.622799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.622813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.636039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.636053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.649729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.649743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.662468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.662483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.696 [2024-07-12 10:54:35.674765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.696 [2024-07-12 10:54:35.674779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.687920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.687935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.701523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.701537] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.715052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.715066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.728743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.728757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.742065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.742079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.755393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.755407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.768846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.768863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.782029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.782043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.795105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.795120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.808184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.808198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.821487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.821501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.834389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.834404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.847539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.847554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.860165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.860179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.873449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.873464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.886766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.886781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.899398] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.899413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.912598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.912612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.925977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.925992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.958 [2024-07-12 10:54:35.939543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.958 [2024-07-12 10:54:35.939558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.220 [2024-07-12 10:54:35.953070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.220 [2024-07-12 10:54:35.953084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.220 [2024-07-12 10:54:35.966154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.220 [2024-07-12 10:54:35.966169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.220 00:16:19.220 Latency(us) 00:16:19.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.220 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:19.220 Nvme1n1 : 5.00 19443.65 151.90 0.00 0.00 6577.77 2798.93 16165.55 00:16:19.220 =================================================================================================================== 00:16:19.220 Total : 19443.65 151.90 0.00 0.00 6577.77 2798.93 16165.55 00:16:19.220 [2024-07-12 10:54:35.975970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.220 [2024-07-12 10:54:35.975987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.220 [2024-07-12 10:54:35.987984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.220 [2024-07-12 10:54:35.987995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.220 [2024-07-12 10:54:36.000019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.220 [2024-07-12 10:54:36.000030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.220 [2024-07-12 10:54:36.012048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.220 [2024-07-12 10:54:36.012059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.220 [2024-07-12 10:54:36.024077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.220 [2024-07-12 10:54:36.024087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.220 [2024-07-12 10:54:36.036105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.220 [2024-07-12 10:54:36.036115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.220 [2024-07-12 10:54:36.048138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.220 [2024-07-12 10:54:36.048144] 
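The storm above is the namespace hot-add stress step: NSID 1 stays occupied while I/O runs, so every retry comes back through the paused-subsystem RPC path (nvmf_rpc_ns_paused) and is rejected without disturbing the workload. A hypothetical sketch of a loop that produces exactly this pattern (not the verbatim zcopy.sh logic; the PID variable and NQN are illustrative):

    # keep re-adding the occupied NSID while the background I/O job ($io_pid) runs
    while kill -0 "$io_pid" 2>/dev/null; do
        # must fail with "Requested NSID 1 already in use"; success would be a bug
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 && exit 1
    done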
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2064456) - No such process
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2064456
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:19.220 delay0
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
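Swapping malloc0 out for the delay0 wrapper is what makes the abort run below meaningful: bdev_delay_create injects artificial completion latency (the -r/-t/-w/-n values are average and p99 read/write latencies in microseconds, so 1000000 ≈ 1 s), which keeps queued I/O outstanding long enough for abort commands to catch it in flight. Condensed into plain RPC form, the sequence is (a sketch mirroring the rpc_cmd calls above; paths are relative to the SPDK tree):

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # detach the fast malloc0
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                        # ~1 s per I/O
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'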
00:16:19.220 10:54:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:16:19.220 EAL: No free 2048 kB hugepages reported on node 1
00:16:19.220 [2024-07-12 10:54:36.195833] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:16:25.819 Initializing NVMe Controllers
00:16:25.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:25.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:25.819 Initialization complete. Launching workers.
00:16:25.819 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 157
00:16:25.819 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 438, failed to submit 39
00:16:25.819 success 266, unsuccess 172, failed 0
10:54:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
10:54:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2062145 ']'
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2062145
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2062145 ']'
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2062145
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2062145
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2062145'
killing process with pid 2062145
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2062145
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2062145
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
10:54:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
10:54:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:27.733 10:54:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:27.733
00:16:27.733 real	0m33.172s
00:16:27.733 user	0m44.577s
00:16:27.733 sys	0m10.331s
00:16:27.733 10:54:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:27.733 10:54:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:27.733 ************************************
00:16:27.733 END TEST nvmf_zcopy
00:16:27.733 ************************************
00:16:27.733 10:54:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
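The START/END banners and the trailing return code around each suite come from the run_test wrapper in autotest_common.sh; the next suite, nvmf_nmic, is launched through the same wrapper below. Reduced to its visible behavior, the pattern is roughly this (a simplified sketch, not the actual helper, which also handles timing and xtrace bookkeeping):

    run_test() {
        local suite=$1; shift
        echo "************************************"
        echo "START TEST $suite"
        echo "************************************"
        "$@"                  # e.g. test/nvmf/target/nmic.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $suite"
        echo "************************************"
        return $rc            # propagate the suite's exit status to the harness
    }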
00:16:27.733 10:54:44 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:16:27.733 10:54:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:27.733 10:54:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:27.733 10:54:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:27.733 ************************************
00:16:27.733 START TEST nvmf_nmic
00:16:27.733 ************************************
00:16:27.733 10:54:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:16:27.995 * Looking for test storage...
00:16:27.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:27.995 10:54:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable
00:16:27.996 10:54:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=()
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=()
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=()
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=()
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=()
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=()
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=()
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:16:36.174 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:16:36.174 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:16:36.174 Found net devices under 0000:4b:00.0: cvl_0_0
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:16:36.174 Found net devices under 0000:4b:00.1: cvl_0_1
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes
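gather_supported_nvmf_pci_devs classifies ports by PCI vendor:device ID (0x8086:0x159b is an Intel E810 function served by the ice driver) and then resolves each function to its renamed netdev through sysfs. The same lookup can be reproduced by hand; a sketch, using the PCI address found above:

    pci=0000:4b:00.0
    cat /sys/bus/pci/devices/$pci/vendor   # 0x8086 (Intel)
    cat /sys/bus/pci/devices/$pci/device   # 0x159b (E810, bound to 'ice')
    ls /sys/bus/pci/devices/$pci/net/      # cvl_0_0 -- the netdev the harness picks up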
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:36.174 10:54:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:36.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:36.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms
00:16:36.174
00:16:36.174 --- 10.0.0.2 ping statistics ---
00:16:36.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:36.174 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:36.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:36.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms
00:16:36.174
00:16:36.174 --- 10.0.0.1 ping statistics ---
00:16:36.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:36.174 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
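nvmftestinit has now split the two E810 ports into a point-to-point test link: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and both directions are verified with a ping. Condensed, the topology amounts to (the same commands the harness just ran, minus the trace noise):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (root netns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # verify the link both ways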
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2070826
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2070826
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2070826 ']'
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:36.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:36.174 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:36.174 [2024-07-12 10:54:52.172327] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:16:36.174 [2024-07-12 10:54:52.172392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:36.174 EAL: No free 2048 kB hugepages reported on node 1
00:16:36.174 [2024-07-12 10:54:52.260464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:36.174 [2024-07-12 10:54:52.359602] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:36.174 [2024-07-12 10:54:52.359662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
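Because NVMF_APP was prefixed with the namespace command earlier, nvmf_tgt runs inside cvl_0_0_ns_spdk and every listener below is bound there. The start/wait pattern that waitforlisten implements reduces to roughly this (a sketch of the pattern, assuming the default /var/tmp/spdk.sock socket; the real helper caps the loop at max_retries=100):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the target answers
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done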
00:16:36.174 [2024-07-12 10:54:52.359671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:36.175 [2024-07-12 10:54:52.359677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:36.175 [2024-07-12 10:54:52.359684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:36.175 [2024-07-12 10:54:52.359845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:36.175 [2024-07-12 10:54:52.359989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:16:36.175 [2024-07-12 10:54:52.360168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:16:36.175 [2024-07-12 10:54:52.360169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:36.175 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:36.175 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0
00:16:36.175 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:16:36.175 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable
00:16:36.175 10:54:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:36.175 10:54:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:36.175 [2024-07-12 10:54:53.017477] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:36.175 Malloc0
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:36.175 [2024-07-12 10:54:53.083237] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:36.175 test case1: single bdev can't be used in multiple subsystems 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:36.175 [2024-07-12 10:54:53.119111] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:36.175 [2024-07-12 10:54:53.119138] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:36.175 [2024-07-12 10:54:53.119146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.175 request: 00:16:36.175 { 00:16:36.175 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:36.175 "namespace": { 00:16:36.175 "bdev_name": "Malloc0", 00:16:36.175 "no_auto_visible": false 00:16:36.175 }, 00:16:36.175 "method": "nvmf_subsystem_add_ns", 00:16:36.175 "req_id": 1 00:16:36.175 } 00:16:36.175 Got JSON-RPC error response 00:16:36.175 response: 00:16:36.175 { 00:16:36.175 "code": -32602, 00:16:36.175 "message": "Invalid parameters" 00:16:36.175 } 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:36.175 Adding namespace failed - expected result. 
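For readers replaying test case1 by hand: the nmic.sh trace above boils down to a short rpc.py sequence against a running nvmf_tgt. A minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and rpc.py from the SPDK tree, reusing the bdev name, NQNs, serials, and listen address shown in the trace:

  # transport + one 64 MiB malloc bdev with 512-byte blocks
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # cnode1 claims Malloc0 and listens on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # cnode2 adding the same bdev is expected to fail: Malloc0 already carries an
  # exclusive_write claim, so nvmf_subsystem_add_ns reports Invalid parameters
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'failed as expected'

The failure path matches the bdev.c/subsystem.c errors logged above; the JSON-RPC reply carries code -32602.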
00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:36.175 test case2: host connect to nvmf target in multiple paths 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:36.175 [2024-07-12 10:54:53.131286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.175 10:54:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.088 10:54:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:39.471 10:54:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:39.471 10:54:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:39.471 10:54:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.471 10:54:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:39.471 10:54:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:41.384 10:54:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:41.384 10:54:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:41.384 10:54:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.384 10:54:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:41.384 10:54:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.384 10:54:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:41.384 10:54:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:41.384 [global] 00:16:41.384 thread=1 00:16:41.384 invalidate=1 00:16:41.384 rw=write 00:16:41.384 time_based=1 00:16:41.384 runtime=1 00:16:41.384 ioengine=libaio 00:16:41.384 direct=1 00:16:41.384 bs=4096 00:16:41.384 iodepth=1 00:16:41.384 norandommap=0 00:16:41.384 numjobs=1 00:16:41.384 00:16:41.384 verify_dump=1 00:16:41.384 verify_backlog=512 00:16:41.384 verify_state_save=0 00:16:41.384 do_verify=1 00:16:41.384 verify=crc32c-intel 00:16:41.384 [job0] 00:16:41.384 filename=/dev/nvme0n1 00:16:41.384 Could not set queue depth (nvme0n1) 00:16:41.644 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:41.644 fio-3.35 00:16:41.644 Starting 1 thread 00:16:43.029 00:16:43.029 job0: (groupid=0, jobs=1): err= 0: pid=2072334: Fri Jul 12 10:54:59 2024 00:16:43.029 read: IOPS=486, BW=1946KiB/s (1993kB/s)(1948KiB/1001msec) 00:16:43.029 slat (nsec): min=8134, max=62386, avg=27012.26, stdev=4198.00 
00:16:43.029 clat (usec): min=817, max=1427, avg=1185.19, stdev=65.06 00:16:43.029 lat (usec): min=825, max=1454, avg=1212.20, stdev=65.88 00:16:43.029 clat percentiles (usec): 00:16:43.029 | 1.00th=[ 971], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1139], 00:16:43.030 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:16:43.030 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1254], 95.00th=[ 1270], 00:16:43.030 | 99.00th=[ 1336], 99.50th=[ 1352], 99.90th=[ 1434], 99.95th=[ 1434], 00:16:43.030 | 99.99th=[ 1434] 00:16:43.030 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:43.030 slat (usec): min=9, max=27965, avg=84.90, stdev=1234.62 00:16:43.030 clat (usec): min=462, max=880, avg=699.15, stdev=80.51 00:16:43.030 lat (usec): min=472, max=28693, avg=784.06, stdev=1238.86 00:16:43.030 clat percentiles (usec): 00:16:43.030 | 1.00th=[ 510], 5.00th=[ 537], 10.00th=[ 578], 20.00th=[ 635], 00:16:43.030 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 734], 00:16:43.030 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 791], 95.00th=[ 807], 00:16:43.030 | 99.00th=[ 857], 99.50th=[ 865], 99.90th=[ 881], 99.95th=[ 881], 00:16:43.030 | 99.99th=[ 881] 00:16:43.030 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:43.030 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:43.030 lat (usec) : 500=0.40%, 750=36.14%, 1000=15.22% 00:16:43.030 lat (msec) : 2=48.25% 00:16:43.030 cpu : usr=1.90%, sys=4.10%, ctx=1002, majf=0, minf=1 00:16:43.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.030 issued rwts: total=487,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.030 00:16:43.030 Run status group 0 (all jobs): 00:16:43.030 READ: bw=1946KiB/s (1993kB/s), 1946KiB/s-1946KiB/s (1993kB/s-1993kB/s), io=1948KiB (1995kB), run=1001-1001msec 00:16:43.030 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:43.030 00:16:43.030 Disk stats (read/write): 00:16:43.030 nvme0n1: ios=429/512, merge=0/0, ticks=1414/319, in_queue=1733, util=99.00% 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.030 10:54:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.030 rmmod nvme_tcp 00:16:43.030 rmmod nvme_fabrics 00:16:43.030 rmmod nvme_keyring 00:16:43.030 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.030 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:43.030 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:43.030 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2070826 ']' 00:16:43.030 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2070826 00:16:43.030 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2070826 ']' 00:16:43.030 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2070826 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2070826 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2070826' 00:16:43.291 killing process with pid 2070826 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2070826 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2070826 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.291 10:55:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.838 10:55:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.838 00:16:45.838 real 0m17.566s 00:16:45.838 user 0m48.114s 00:16:45.838 sys 0m6.355s 00:16:45.838 10:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.838 10:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:45.838 ************************************ 00:16:45.838 END TEST nvmf_nmic 00:16:45.838 ************************************ 00:16:45.838 10:55:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:45.838 10:55:02 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:45.838 10:55:02 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:45.838 10:55:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.838 10:55:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.838 ************************************ 00:16:45.838 START TEST nvmf_fio_target 00:16:45.838 ************************************ 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:45.838 * Looking for test storage... 00:16:45.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.838 10:55:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.839 10:55:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.985 10:55:09 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:53.985 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:53.985 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.985 10:55:09 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:53.985 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:53.985 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:16:53.985 00:16:53.985 --- 10.0.0.2 ping statistics --- 00:16:53.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.985 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:16:53.985 00:16:53.985 --- 10.0.0.1 ping statistics --- 00:16:53.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.985 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.985 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2076690 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2076690 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2076690 ']' 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
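Before the fio target test proper starts, the nvmf_tcp_init steps above split the two e810 ports between the root namespace (initiator, 10.0.0.1) and a private namespace (target, 10.0.0.2), so both ends of the TCP connection run on one host over real NICs. A condensed sketch of the same plumbing, using the cvl_0_0/cvl_0_1 interface names from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                             # both pings must succeed before the target starts
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.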
00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.986 10:55:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 [2024-07-12 10:55:09.885911] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:53.986 [2024-07-12 10:55:09.885972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.986 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.986 [2024-07-12 10:55:09.971576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.986 [2024-07-12 10:55:10.074172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.986 [2024-07-12 10:55:10.074231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.986 [2024-07-12 10:55:10.074239] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.986 [2024-07-12 10:55:10.074246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.986 [2024-07-12 10:55:10.074253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.986 [2024-07-12 10:55:10.074411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.986 [2024-07-12 10:55:10.074565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.986 [2024-07-12 10:55:10.074729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.986 [2024-07-12 10:55:10.074730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.986 10:55:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.986 10:55:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:53.986 10:55:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.986 10:55:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:53.986 10:55:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 10:55:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.986 10:55:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:53.986 [2024-07-12 10:55:10.883056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.986 10:55:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:54.246 10:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:54.246 10:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:54.508 10:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:54.508 10:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:54.770 10:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:16:54.770 10:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:54.770 10:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:54.770 10:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:55.032 10:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:55.293 10:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:55.293 10:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:55.553 10:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:55.553 10:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:55.553 10:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:55.553 10:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:55.814 10:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:56.075 10:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:56.075 10:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:56.075 10:55:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:56.075 10:55:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.336 10:55:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.596 [2024-07-12 10:55:13.394012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.596 10:55:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:56.857 10:55:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:56.857 10:55:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.770 10:55:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:58.770 10:55:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:58.770 10:55:15 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.770 10:55:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:58.770 10:55:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:58.770 10:55:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:00.705 10:55:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:00.705 10:55:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:00.705 10:55:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.705 10:55:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:00.705 10:55:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.705 10:55:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:00.705 10:55:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:00.705 [global] 00:17:00.705 thread=1 00:17:00.705 invalidate=1 00:17:00.705 rw=write 00:17:00.705 time_based=1 00:17:00.705 runtime=1 00:17:00.705 ioengine=libaio 00:17:00.705 direct=1 00:17:00.705 bs=4096 00:17:00.705 iodepth=1 00:17:00.705 norandommap=0 00:17:00.705 numjobs=1 00:17:00.705 00:17:00.705 verify_dump=1 00:17:00.705 verify_backlog=512 00:17:00.705 verify_state_save=0 00:17:00.705 do_verify=1 00:17:00.705 verify=crc32c-intel 00:17:00.705 [job0] 00:17:00.705 filename=/dev/nvme0n1 00:17:00.705 [job1] 00:17:00.705 filename=/dev/nvme0n2 00:17:00.705 [job2] 00:17:00.705 filename=/dev/nvme0n3 00:17:00.705 [job3] 00:17:00.705 filename=/dev/nvme0n4 00:17:00.705 Could not set queue depth (nvme0n1) 00:17:00.705 Could not set queue depth (nvme0n2) 00:17:00.705 Could not set queue depth (nvme0n3) 00:17:00.705 Could not set queue depth (nvme0n4) 00:17:00.971 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:00.971 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:00.971 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:00.971 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:00.971 fio-3.35 00:17:00.971 Starting 4 threads 00:17:02.353 00:17:02.353 job0: (groupid=0, jobs=1): err= 0: pid=2078595: Fri Jul 12 10:55:19 2024 00:17:02.353 read: IOPS=15, BW=63.1KiB/s (64.6kB/s)(64.0KiB/1015msec) 00:17:02.353 slat (nsec): min=11345, max=26128, avg=24924.69, stdev=3626.75 00:17:02.353 clat (usec): min=40904, max=42004, avg=41554.97, stdev=473.51 00:17:02.353 lat (usec): min=40930, max=42029, avg=41579.90, stdev=474.04 00:17:02.353 clat percentiles (usec): 00:17:02.353 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:02.353 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:17:02.353 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:02.353 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:02.353 | 99.99th=[42206] 00:17:02.353 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:17:02.353 slat (usec): min=9, max=41591, avg=125.59, stdev=1873.47 
00:17:02.353 clat (usec): min=335, max=767, avg=549.07, stdev=72.55 00:17:02.353 lat (usec): min=346, max=42174, avg=674.66, stdev=1876.44 00:17:02.353 clat percentiles (usec): 00:17:02.353 | 1.00th=[ 367], 5.00th=[ 420], 10.00th=[ 449], 20.00th=[ 486], 00:17:02.353 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 570], 00:17:02.353 | 70.00th=[ 586], 80.00th=[ 611], 90.00th=[ 627], 95.00th=[ 652], 00:17:02.353 | 99.00th=[ 717], 99.50th=[ 742], 99.90th=[ 766], 99.95th=[ 766], 00:17:02.353 | 99.99th=[ 766] 00:17:02.353 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:17:02.353 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:02.353 lat (usec) : 500=24.05%, 750=72.54%, 1000=0.38% 00:17:02.353 lat (msec) : 50=3.03% 00:17:02.353 cpu : usr=0.79%, sys=1.28%, ctx=533, majf=0, minf=1 00:17:02.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:02.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.353 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:02.353 job1: (groupid=0, jobs=1): err= 0: pid=2078596: Fri Jul 12 10:55:19 2024 00:17:02.353 read: IOPS=17, BW=69.4KiB/s (71.0kB/s)(72.0KiB/1038msec) 00:17:02.353 slat (nsec): min=25119, max=25720, avg=25422.61, stdev=190.88 00:17:02.353 clat (usec): min=40927, max=42013, avg=41439.16, stdev=490.96 00:17:02.353 lat (usec): min=40952, max=42038, avg=41464.58, stdev=490.93 00:17:02.353 clat percentiles (usec): 00:17:02.353 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:02.353 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:17:02.353 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:17:02.353 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:02.353 | 99.99th=[42206] 00:17:02.353 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:17:02.353 slat (nsec): min=9502, max=52518, avg=28010.45, stdev=9526.00 00:17:02.353 clat (usec): min=189, max=770, avg=534.95, stdev=79.41 00:17:02.353 lat (usec): min=200, max=780, avg=562.96, stdev=81.65 00:17:02.353 clat percentiles (usec): 00:17:02.353 | 1.00th=[ 343], 5.00th=[ 396], 10.00th=[ 429], 20.00th=[ 469], 00:17:02.353 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 562], 00:17:02.353 | 70.00th=[ 578], 80.00th=[ 603], 90.00th=[ 627], 95.00th=[ 652], 00:17:02.353 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 775], 99.95th=[ 775], 00:17:02.353 | 99.99th=[ 775] 00:17:02.353 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:17:02.353 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:02.353 lat (usec) : 250=0.38%, 500=30.94%, 750=65.09%, 1000=0.19% 00:17:02.353 lat (msec) : 50=3.40% 00:17:02.353 cpu : usr=0.87%, sys=1.16%, ctx=530, majf=0, minf=1 00:17:02.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:02.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.353 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:02.353 job2: (groupid=0, jobs=1): err= 0: pid=2078597: Fri Jul 12 10:55:19 2024 
00:17:02.353 read: IOPS=574, BW=2298KiB/s (2353kB/s)(2300KiB/1001msec) 00:17:02.353 slat (nsec): min=6541, max=59394, avg=22268.76, stdev=7021.50 00:17:02.353 clat (usec): min=420, max=1014, avg=736.43, stdev=94.46 00:17:02.353 lat (usec): min=427, max=1039, avg=758.70, stdev=96.70 00:17:02.353 clat percentiles (usec): 00:17:02.353 | 1.00th=[ 469], 5.00th=[ 570], 10.00th=[ 603], 20.00th=[ 660], 00:17:02.353 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 750], 60.00th=[ 775], 00:17:02.353 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 865], 00:17:02.353 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1012], 99.95th=[ 1012], 00:17:02.353 | 99.99th=[ 1012] 00:17:02.353 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:17:02.353 slat (nsec): min=9386, max=84792, avg=29700.64, stdev=8075.80 00:17:02.353 clat (usec): min=208, max=872, avg=509.57, stdev=106.92 00:17:02.353 lat (usec): min=219, max=903, avg=539.27, stdev=109.20 00:17:02.353 clat percentiles (usec): 00:17:02.353 | 1.00th=[ 265], 5.00th=[ 343], 10.00th=[ 375], 20.00th=[ 412], 00:17:02.353 | 30.00th=[ 457], 40.00th=[ 482], 50.00th=[ 510], 60.00th=[ 537], 00:17:02.353 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 676], 00:17:02.353 | 99.00th=[ 734], 99.50th=[ 783], 99.90th=[ 816], 99.95th=[ 873], 00:17:02.353 | 99.99th=[ 873] 00:17:02.353 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:17:02.353 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:02.353 lat (usec) : 250=0.19%, 500=30.33%, 750=50.59%, 1000=18.82% 00:17:02.353 lat (msec) : 2=0.06% 00:17:02.353 cpu : usr=3.00%, sys=3.70%, ctx=1599, majf=0, minf=1 00:17:02.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:02.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.353 issued rwts: total=575,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:02.353 job3: (groupid=0, jobs=1): err= 0: pid=2078598: Fri Jul 12 10:55:19 2024 00:17:02.353 read: IOPS=17, BW=70.8KiB/s (72.5kB/s)(72.0KiB/1017msec) 00:17:02.353 slat (nsec): min=14336, max=26038, avg=24160.33, stdev=3514.71 00:17:02.353 clat (usec): min=41906, max=42031, avg=41968.94, stdev=38.52 00:17:02.353 lat (usec): min=41932, max=42052, avg=41993.10, stdev=37.58 00:17:02.353 clat percentiles (usec): 00:17:02.353 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:02.353 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:02.353 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:02.353 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:02.353 | 99.99th=[42206] 00:17:02.353 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:17:02.353 slat (usec): min=5, max=7342, avg=36.97, stdev=342.54 00:17:02.353 clat (usec): min=164, max=779, avg=466.45, stdev=122.96 00:17:02.353 lat (usec): min=170, max=8084, avg=503.43, stdev=374.48 00:17:02.353 clat percentiles (usec): 00:17:02.353 | 1.00th=[ 190], 5.00th=[ 265], 10.00th=[ 293], 20.00th=[ 371], 00:17:02.353 | 30.00th=[ 408], 40.00th=[ 429], 50.00th=[ 469], 60.00th=[ 502], 00:17:02.353 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 627], 95.00th=[ 676], 00:17:02.353 | 99.00th=[ 725], 99.50th=[ 742], 99.90th=[ 783], 99.95th=[ 783], 00:17:02.353 | 99.99th=[ 783] 
00:17:02.353 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:17:02.353 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:02.353 lat (usec) : 250=3.40%, 500=54.34%, 750=38.49%, 1000=0.38% 00:17:02.353 lat (msec) : 50=3.40% 00:17:02.353 cpu : usr=0.39%, sys=0.79%, ctx=535, majf=0, minf=1 00:17:02.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:02.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.353 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:02.353 00:17:02.353 Run status group 0 (all jobs): 00:17:02.354 READ: bw=2416KiB/s (2474kB/s), 63.1KiB/s-2298KiB/s (64.6kB/s-2353kB/s), io=2508KiB (2568kB), run=1001-1038msec 00:17:02.354 WRITE: bw=9865KiB/s (10.1MB/s), 1973KiB/s-4092KiB/s (2020kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1038msec 00:17:02.354 00:17:02.354 Disk stats (read/write): 00:17:02.354 nvme0n1: ios=62/512, merge=0/0, ticks=1304/268, in_queue=1572, util=86.77% 00:17:02.354 nvme0n2: ios=62/512, merge=0/0, ticks=590/265, in_queue=855, util=85.71% 00:17:02.354 nvme0n3: ios=568/715, merge=0/0, ticks=434/344, in_queue=778, util=90.31% 00:17:02.354 nvme0n4: ios=72/512, merge=0/0, ticks=718/237, in_queue=955, util=99.55% 00:17:02.354 10:55:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:02.354 [global] 00:17:02.354 thread=1 00:17:02.354 invalidate=1 00:17:02.354 rw=randwrite 00:17:02.354 time_based=1 00:17:02.354 runtime=1 00:17:02.354 ioengine=libaio 00:17:02.354 direct=1 00:17:02.354 bs=4096 00:17:02.354 iodepth=1 00:17:02.354 norandommap=0 00:17:02.354 numjobs=1 00:17:02.354 00:17:02.354 verify_dump=1 00:17:02.354 verify_backlog=512 00:17:02.354 verify_state_save=0 00:17:02.354 do_verify=1 00:17:02.354 verify=crc32c-intel 00:17:02.354 [job0] 00:17:02.354 filename=/dev/nvme0n1 00:17:02.354 [job1] 00:17:02.354 filename=/dev/nvme0n2 00:17:02.354 [job2] 00:17:02.354 filename=/dev/nvme0n3 00:17:02.354 [job3] 00:17:02.354 filename=/dev/nvme0n4 00:17:02.354 Could not set queue depth (nvme0n1) 00:17:02.354 Could not set queue depth (nvme0n2) 00:17:02.354 Could not set queue depth (nvme0n3) 00:17:02.354 Could not set queue depth (nvme0n4) 00:17:02.625 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.625 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.625 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.625 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.625 fio-3.35 00:17:02.625 Starting 4 threads 00:17:04.007 00:17:04.008 job0: (groupid=0, jobs=1): err= 0: pid=2079121: Fri Jul 12 10:55:20 2024 00:17:04.008 read: IOPS=14, BW=58.4KiB/s (59.8kB/s)(60.0KiB/1027msec) 00:17:04.008 slat (nsec): min=26235, max=31173, avg=27007.20, stdev=1176.71 00:17:04.008 clat (usec): min=41084, max=42176, avg=41916.35, stdev=251.60 00:17:04.008 lat (usec): min=41111, max=42202, avg=41943.36, stdev=251.59 00:17:04.008 clat percentiles (usec): 00:17:04.008 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 
20.00th=[41681], 00:17:04.008 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:04.008 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:04.008 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:04.008 | 99.99th=[42206] 00:17:04.008 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:17:04.008 slat (nsec): min=8515, max=58678, avg=29934.29, stdev=10350.68 00:17:04.008 clat (usec): min=441, max=1375, avg=738.66, stdev=108.42 00:17:04.008 lat (usec): min=468, max=1386, avg=768.59, stdev=111.31 00:17:04.008 clat percentiles (usec): 00:17:04.008 | 1.00th=[ 457], 5.00th=[ 537], 10.00th=[ 594], 20.00th=[ 660], 00:17:04.008 | 30.00th=[ 693], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 775], 00:17:04.008 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 889], 00:17:04.008 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1369], 99.95th=[ 1369], 00:17:04.008 | 99.99th=[ 1369] 00:17:04.008 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:17:04.008 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:04.008 lat (usec) : 500=3.42%, 750=47.25%, 1000=46.30% 00:17:04.008 lat (msec) : 2=0.19%, 50=2.85% 00:17:04.008 cpu : usr=1.07%, sys=1.85%, ctx=529, majf=0, minf=1 00:17:04.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.008 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:04.008 job1: (groupid=0, jobs=1): err= 0: pid=2079122: Fri Jul 12 10:55:20 2024 00:17:04.008 read: IOPS=346, BW=1387KiB/s (1420kB/s)(1388KiB/1001msec) 00:17:04.008 slat (nsec): min=7352, max=44194, avg=25324.80, stdev=3665.08 00:17:04.008 clat (usec): min=823, max=42016, avg=1736.51, stdev=4870.36 00:17:04.008 lat (usec): min=833, max=42043, avg=1761.83, stdev=4870.58 00:17:04.008 clat percentiles (usec): 00:17:04.008 | 1.00th=[ 898], 5.00th=[ 955], 10.00th=[ 1004], 20.00th=[ 1074], 00:17:04.008 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1188], 00:17:04.008 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1303], 00:17:04.008 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:04.008 | 99.99th=[42206] 00:17:04.008 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:04.008 slat (nsec): min=8664, max=67037, avg=29428.14, stdev=10932.45 00:17:04.008 clat (usec): min=321, max=1366, avg=716.64, stdev=121.61 00:17:04.008 lat (usec): min=355, max=1400, avg=746.07, stdev=124.63 00:17:04.008 clat percentiles (usec): 00:17:04.008 | 1.00th=[ 429], 5.00th=[ 469], 10.00th=[ 562], 20.00th=[ 619], 00:17:04.008 | 30.00th=[ 668], 40.00th=[ 701], 50.00th=[ 734], 60.00th=[ 758], 00:17:04.008 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 889], 00:17:04.008 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 1369], 99.95th=[ 1369], 00:17:04.008 | 99.99th=[ 1369] 00:17:04.008 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:17:04.008 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:04.008 lat (usec) : 500=3.84%, 750=30.62%, 1000=28.64% 00:17:04.008 lat (msec) : 2=36.32%, 50=0.58% 00:17:04.008 cpu : usr=1.50%, sys=2.90%, ctx=860, majf=0, minf=1 00:17:04.008 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.008 issued rwts: total=347,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:04.008 job2: (groupid=0, jobs=1): err= 0: pid=2079123: Fri Jul 12 10:55:20 2024 00:17:04.008 read: IOPS=495, BW=1982KiB/s (2030kB/s)(1984KiB/1001msec) 00:17:04.008 slat (nsec): min=23903, max=59911, avg=24974.52, stdev=3616.06 00:17:04.008 clat (usec): min=682, max=1408, avg=1163.27, stdev=115.68 00:17:04.008 lat (usec): min=706, max=1433, avg=1188.24, stdev=115.46 00:17:04.008 clat percentiles (usec): 00:17:04.008 | 1.00th=[ 824], 5.00th=[ 930], 10.00th=[ 996], 20.00th=[ 1074], 00:17:04.008 | 30.00th=[ 1123], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:17:04.008 | 70.00th=[ 1237], 80.00th=[ 1254], 90.00th=[ 1287], 95.00th=[ 1319], 00:17:04.008 | 99.00th=[ 1385], 99.50th=[ 1385], 99.90th=[ 1401], 99.95th=[ 1401], 00:17:04.008 | 99.99th=[ 1401] 00:17:04.008 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:04.008 slat (nsec): min=9252, max=47983, avg=27857.44, stdev=7738.67 00:17:04.008 clat (usec): min=301, max=1510, avg=759.40, stdev=121.94 00:17:04.008 lat (usec): min=331, max=1540, avg=787.26, stdev=124.84 00:17:04.008 clat percentiles (usec): 00:17:04.008 | 1.00th=[ 469], 5.00th=[ 570], 10.00th=[ 603], 20.00th=[ 676], 00:17:04.008 | 30.00th=[ 709], 40.00th=[ 734], 50.00th=[ 758], 60.00th=[ 791], 00:17:04.008 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 889], 95.00th=[ 922], 00:17:04.008 | 99.00th=[ 1057], 99.50th=[ 1287], 99.90th=[ 1516], 99.95th=[ 1516], 00:17:04.008 | 99.99th=[ 1516] 00:17:04.008 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:17:04.008 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:04.008 lat (usec) : 500=0.99%, 750=22.72%, 1000=31.75% 00:17:04.008 lat (msec) : 2=44.54% 00:17:04.008 cpu : usr=1.40%, sys=2.90%, ctx=1008, majf=0, minf=1 00:17:04.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.008 issued rwts: total=496,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:04.008 job3: (groupid=0, jobs=1): err= 0: pid=2079124: Fri Jul 12 10:55:20 2024 00:17:04.008 read: IOPS=460, BW=1842KiB/s (1886kB/s)(1844KiB/1001msec) 00:17:04.008 slat (nsec): min=7152, max=46739, avg=26671.45, stdev=4529.22 00:17:04.008 clat (usec): min=618, max=42081, avg=1280.67, stdev=2229.01 00:17:04.008 lat (usec): min=643, max=42106, avg=1307.35, stdev=2228.99 00:17:04.008 clat percentiles (usec): 00:17:04.008 | 1.00th=[ 742], 5.00th=[ 898], 10.00th=[ 955], 20.00th=[ 1045], 00:17:04.008 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:17:04.008 | 70.00th=[ 1205], 80.00th=[ 1254], 90.00th=[ 1303], 95.00th=[ 1336], 00:17:04.008 | 99.00th=[ 1418], 99.50th=[ 1450], 99.90th=[42206], 99.95th=[42206], 00:17:04.008 | 99.99th=[42206] 00:17:04.008 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:04.008 slat (nsec): min=8992, max=54658, avg=30422.40, stdev=10182.74 00:17:04.008 clat (usec): 
min=358, max=1402, avg=731.14, stdev=114.50 00:17:04.008 lat (usec): min=368, max=1436, avg=761.56, stdev=119.45 00:17:04.008 clat percentiles (usec): 00:17:04.008 | 1.00th=[ 469], 5.00th=[ 537], 10.00th=[ 578], 20.00th=[ 644], 00:17:04.008 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 734], 60.00th=[ 766], 00:17:04.008 | 70.00th=[ 799], 80.00th=[ 832], 90.00th=[ 873], 95.00th=[ 906], 00:17:04.008 | 99.00th=[ 963], 99.50th=[ 988], 99.90th=[ 1401], 99.95th=[ 1401], 00:17:04.008 | 99.99th=[ 1401] 00:17:04.008 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:17:04.008 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:04.008 lat (usec) : 500=1.64%, 750=28.06%, 1000=29.50% 00:17:04.008 lat (msec) : 2=40.60%, 50=0.21% 00:17:04.008 cpu : usr=2.10%, sys=3.60%, ctx=974, majf=0, minf=1 00:17:04.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.008 issued rwts: total=461,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:04.008 00:17:04.008 Run status group 0 (all jobs): 00:17:04.008 READ: bw=5137KiB/s (5261kB/s), 58.4KiB/s-1982KiB/s (59.8kB/s-2030kB/s), io=5276KiB (5403kB), run=1001-1027msec 00:17:04.008 WRITE: bw=7977KiB/s (8168kB/s), 1994KiB/s-2046KiB/s (2042kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1027msec 00:17:04.008 00:17:04.008 Disk stats (read/write): 00:17:04.008 nvme0n1: ios=42/512, merge=0/0, ticks=751/300, in_queue=1051, util=97.19% 00:17:04.008 nvme0n2: ios=222/512, merge=0/0, ticks=612/299, in_queue=911, util=96.84% 00:17:04.008 nvme0n3: ios=413/512, merge=0/0, ticks=459/368, in_queue=827, util=91.24% 00:17:04.008 nvme0n4: ios=380/512, merge=0/0, ticks=945/303, in_queue=1248, util=96.69% 00:17:04.008 10:55:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:04.008 [global] 00:17:04.008 thread=1 00:17:04.008 invalidate=1 00:17:04.008 rw=write 00:17:04.008 time_based=1 00:17:04.008 runtime=1 00:17:04.008 ioengine=libaio 00:17:04.008 direct=1 00:17:04.008 bs=4096 00:17:04.008 iodepth=128 00:17:04.008 norandommap=0 00:17:04.008 numjobs=1 00:17:04.008 00:17:04.008 verify_dump=1 00:17:04.008 verify_backlog=512 00:17:04.008 verify_state_save=0 00:17:04.008 do_verify=1 00:17:04.008 verify=crc32c-intel 00:17:04.008 [job0] 00:17:04.008 filename=/dev/nvme0n1 00:17:04.008 [job1] 00:17:04.008 filename=/dev/nvme0n2 00:17:04.008 [job2] 00:17:04.008 filename=/dev/nvme0n3 00:17:04.008 [job3] 00:17:04.008 filename=/dev/nvme0n4 00:17:04.008 Could not set queue depth (nvme0n1) 00:17:04.008 Could not set queue depth (nvme0n2) 00:17:04.008 Could not set queue depth (nvme0n3) 00:17:04.008 Could not set queue depth (nvme0n4) 00:17:04.268 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:04.268 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:04.268 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:04.268 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:04.268 fio-3.35 00:17:04.268 Starting 4 threads 00:17:05.655 
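The job file echoed just above is what SPDK's fio-wrapper script feeds to fio for this pass: four libaio jobs, one per NVMe-oF namespace, issuing sequential 4 KiB writes at queue depth 128 for one second, with crc32c-intel verification of what was written. A rough standalone equivalent, a sketch only, assuming the namespaces enumerate as /dev/nvme0n1 through /dev/nvme0n4 as they do in this run, with every option lifted from the [global] section printed above:

# write out the job file shown in the log, then run plain fio against it
cat > /tmp/nvmf-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-write.fio

The queue depth is the point of this pass: the fio.sh@51 run above used -d 1 to exercise the single-command latency path, while this one keeps 128 commands in flight per namespace to exercise queueing in the TCP transport.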
00:17:05.655 job0: (groupid=0, jobs=1): err= 0: pid=2079646: Fri Jul 12 10:55:22 2024 00:17:05.655 read: IOPS=4472, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1004msec) 00:17:05.655 slat (nsec): min=864, max=13800k, avg=100372.52, stdev=714011.28 00:17:05.655 clat (usec): min=1946, max=50654, avg=12467.47, stdev=7877.69 00:17:05.655 lat (usec): min=1950, max=50663, avg=12567.84, stdev=7944.79 00:17:05.655 clat percentiles (usec): 00:17:05.655 | 1.00th=[ 4424], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7308], 00:17:05.655 | 30.00th=[ 7963], 40.00th=[ 8848], 50.00th=[ 9896], 60.00th=[11076], 00:17:05.655 | 70.00th=[12256], 80.00th=[15795], 90.00th=[24249], 95.00th=[28967], 00:17:05.655 | 99.00th=[46400], 99.50th=[47449], 99.90th=[50594], 99.95th=[50594], 00:17:05.655 | 99.99th=[50594] 00:17:05.655 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:17:05.655 slat (nsec): min=1555, max=22984k, avg=110842.94, stdev=797738.20 00:17:05.655 clat (usec): min=1315, max=59871, avg=15458.02, stdev=10237.31 00:17:05.655 lat (usec): min=1323, max=59896, avg=15568.87, stdev=10310.24 00:17:05.655 clat percentiles (usec): 00:17:05.655 | 1.00th=[ 3949], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 6915], 00:17:05.655 | 30.00th=[ 7963], 40.00th=[ 9110], 50.00th=[12387], 60.00th=[15139], 00:17:05.655 | 70.00th=[18220], 80.00th=[22414], 90.00th=[30540], 95.00th=[36439], 00:17:05.655 | 99.00th=[48497], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:17:05.655 | 99.99th=[60031] 00:17:05.655 bw ( KiB/s): min=17728, max=19136, per=19.31%, avg=18432.00, stdev=995.61, samples=2 00:17:05.655 iops : min= 4432, max= 4784, avg=4608.00, stdev=248.90, samples=2 00:17:05.655 lat (msec) : 2=0.21%, 4=0.75%, 10=46.48%, 20=33.57%, 50=18.36% 00:17:05.655 lat (msec) : 100=0.64% 00:17:05.655 cpu : usr=3.39%, sys=4.09%, ctx=422, majf=0, minf=1 00:17:05.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:05.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.655 issued rwts: total=4490,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.655 job1: (groupid=0, jobs=1): err= 0: pid=2079647: Fri Jul 12 10:55:22 2024 00:17:05.655 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:17:05.655 slat (nsec): min=883, max=21007k, avg=69471.00, stdev=581797.09 00:17:05.655 clat (usec): min=3591, max=69424, avg=9361.47, stdev=7403.85 00:17:05.655 lat (usec): min=3601, max=69450, avg=9430.95, stdev=7449.11 00:17:05.655 clat percentiles (usec): 00:17:05.655 | 1.00th=[ 3687], 5.00th=[ 5145], 10.00th=[ 5932], 20.00th=[ 6849], 00:17:05.655 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7570], 00:17:05.655 | 70.00th=[ 7832], 80.00th=[ 9634], 90.00th=[13173], 95.00th=[16319], 00:17:05.655 | 99.00th=[48497], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:17:05.655 | 99.99th=[69731] 00:17:05.655 write: IOPS=6926, BW=27.1MiB/s (28.4MB/s)(27.1MiB/1003msec); 0 zone resets 00:17:05.655 slat (nsec): min=1560, max=33408k, avg=72099.09, stdev=592735.11 00:17:05.655 clat (usec): min=650, max=42736, avg=8665.64, stdev=4159.14 00:17:05.655 lat (usec): min=3065, max=42747, avg=8737.74, stdev=4205.32 00:17:05.655 clat percentiles (usec): 00:17:05.655 | 1.00th=[ 3818], 5.00th=[ 4555], 10.00th=[ 6128], 20.00th=[ 6915], 00:17:05.655 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 
7701], 00:17:05.655 | 70.00th=[ 8160], 80.00th=[ 8979], 90.00th=[13566], 95.00th=[18220], 00:17:05.655 | 99.00th=[25560], 99.50th=[30802], 99.90th=[39584], 99.95th=[42730], 00:17:05.655 | 99.99th=[42730] 00:17:05.655 bw ( KiB/s): min=23680, max=30872, per=28.57%, avg=27276.00, stdev=5085.51, samples=2 00:17:05.655 iops : min= 5920, max= 7718, avg=6819.00, stdev=1271.38, samples=2 00:17:05.655 lat (usec) : 750=0.01% 00:17:05.655 lat (msec) : 4=2.27%, 10=81.62%, 20=11.94%, 50=3.84%, 100=0.32% 00:17:05.655 cpu : usr=4.59%, sys=6.19%, ctx=610, majf=0, minf=1 00:17:05.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:05.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.655 issued rwts: total=6656,6947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.655 job2: (groupid=0, jobs=1): err= 0: pid=2079650: Fri Jul 12 10:55:22 2024 00:17:05.655 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:17:05.655 slat (nsec): min=905, max=19306k, avg=95213.40, stdev=600709.57 00:17:05.655 clat (usec): min=6449, max=43751, avg=11982.60, stdev=5386.72 00:17:05.655 lat (usec): min=6456, max=43758, avg=12077.82, stdev=5405.34 00:17:05.655 clat percentiles (usec): 00:17:05.655 | 1.00th=[ 7242], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9241], 00:17:05.655 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:17:05.655 | 70.00th=[10290], 80.00th=[14877], 90.00th=[19792], 95.00th=[22938], 00:17:05.655 | 99.00th=[28967], 99.50th=[40109], 99.90th=[43779], 99.95th=[43779], 00:17:05.655 | 99.99th=[43779] 00:17:05.655 write: IOPS=5502, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1006msec); 0 zone resets 00:17:05.655 slat (nsec): min=1583, max=19166k, avg=89807.97, stdev=558278.98 00:17:05.655 clat (usec): min=2036, max=71250, avg=11663.13, stdev=9632.62 00:17:05.655 lat (usec): min=5358, max=72474, avg=11752.94, stdev=9686.92 00:17:05.655 clat percentiles (usec): 00:17:05.655 | 1.00th=[ 6128], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7504], 00:17:05.655 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 9110], 00:17:05.655 | 70.00th=[ 9634], 80.00th=[12387], 90.00th=[21103], 95.00th=[28705], 00:17:05.655 | 99.00th=[65274], 99.50th=[66323], 99.90th=[70779], 99.95th=[70779], 00:17:05.655 | 99.99th=[70779] 00:17:05.655 bw ( KiB/s): min=19376, max=23888, per=22.66%, avg=21632.00, stdev=3190.47, samples=2 00:17:05.655 iops : min= 4844, max= 5972, avg=5408.00, stdev=797.62, samples=2 00:17:05.655 lat (msec) : 4=0.01%, 10=67.39%, 20=22.86%, 50=8.64%, 100=1.10% 00:17:05.655 cpu : usr=2.99%, sys=2.59%, ctx=730, majf=0, minf=1 00:17:05.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:05.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.655 issued rwts: total=5120,5536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.656 job3: (groupid=0, jobs=1): err= 0: pid=2079651: Fri Jul 12 10:55:22 2024 00:17:05.656 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:17:05.656 slat (nsec): min=958, max=7935.2k, avg=75748.62, stdev=537804.36 00:17:05.656 clat (usec): min=3875, max=26352, avg=10179.50, stdev=2545.03 00:17:05.656 lat (usec): min=3883, max=26355, 
avg=10255.25, stdev=2569.60 00:17:05.656 clat percentiles (usec): 00:17:05.656 | 1.00th=[ 5932], 5.00th=[ 6783], 10.00th=[ 7373], 20.00th=[ 8094], 00:17:05.656 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10683], 00:17:05.656 | 70.00th=[11076], 80.00th=[12125], 90.00th=[13829], 95.00th=[14615], 00:17:05.656 | 99.00th=[16909], 99.50th=[17171], 99.90th=[22414], 99.95th=[26346], 00:17:05.656 | 99.99th=[26346] 00:17:05.656 write: IOPS=6879, BW=26.9MiB/s (28.2MB/s)(27.0MiB/1006msec); 0 zone resets 00:17:05.656 slat (nsec): min=1691, max=6513.9k, avg=66030.66, stdev=423548.37 00:17:05.656 clat (usec): min=1156, max=27491, avg=8552.17, stdev=3362.58 00:17:05.656 lat (usec): min=1165, max=27493, avg=8618.20, stdev=3376.09 00:17:05.656 clat percentiles (usec): 00:17:05.656 | 1.00th=[ 3064], 5.00th=[ 5145], 10.00th=[ 5800], 20.00th=[ 6259], 00:17:05.656 | 30.00th=[ 6849], 40.00th=[ 7439], 50.00th=[ 7832], 60.00th=[ 8094], 00:17:05.656 | 70.00th=[ 8979], 80.00th=[10421], 90.00th=[12256], 95.00th=[14615], 00:17:05.656 | 99.00th=[22938], 99.50th=[25297], 99.90th=[27395], 99.95th=[27395], 00:17:05.656 | 99.99th=[27395] 00:17:05.656 bw ( KiB/s): min=25672, max=28672, per=28.46%, avg=27172.00, stdev=2121.32, samples=2 00:17:05.656 iops : min= 6418, max= 7168, avg=6793.00, stdev=530.33, samples=2 00:17:05.656 lat (msec) : 2=0.01%, 4=1.11%, 10=64.20%, 20=33.51%, 50=1.16% 00:17:05.656 cpu : usr=4.48%, sys=8.46%, ctx=478, majf=0, minf=1 00:17:05.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:05.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.656 issued rwts: total=6656,6921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.656 00:17:05.656 Run status group 0 (all jobs): 00:17:05.656 READ: bw=89.0MiB/s (93.3MB/s), 17.5MiB/s-25.9MiB/s (18.3MB/s-27.2MB/s), io=89.5MiB (93.9MB), run=1003-1006msec 00:17:05.656 WRITE: bw=93.2MiB/s (97.8MB/s), 17.9MiB/s-27.1MiB/s (18.8MB/s-28.4MB/s), io=93.8MiB (98.4MB), run=1003-1006msec 00:17:05.656 00:17:05.656 Disk stats (read/write): 00:17:05.656 nvme0n1: ios=3609/3591, merge=0/0, ticks=18602/20760, in_queue=39362, util=87.17% 00:17:05.656 nvme0n2: ios=5694/6144, merge=0/0, ticks=24990/21957, in_queue=46947, util=98.78% 00:17:05.656 nvme0n3: ios=4666/4640, merge=0/0, ticks=14745/13445, in_queue=28190, util=96.00% 00:17:05.656 nvme0n4: ios=5495/5632, merge=0/0, ticks=53722/48032, in_queue=101754, util=99.68% 00:17:05.656 10:55:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:05.656 [global] 00:17:05.656 thread=1 00:17:05.656 invalidate=1 00:17:05.656 rw=randwrite 00:17:05.656 time_based=1 00:17:05.656 runtime=1 00:17:05.656 ioengine=libaio 00:17:05.656 direct=1 00:17:05.656 bs=4096 00:17:05.656 iodepth=128 00:17:05.656 norandommap=0 00:17:05.656 numjobs=1 00:17:05.656 00:17:05.656 verify_dump=1 00:17:05.656 verify_backlog=512 00:17:05.656 verify_state_save=0 00:17:05.656 do_verify=1 00:17:05.656 verify=crc32c-intel 00:17:05.656 [job0] 00:17:05.656 filename=/dev/nvme0n1 00:17:05.656 [job1] 00:17:05.656 filename=/dev/nvme0n2 00:17:05.656 [job2] 00:17:05.656 filename=/dev/nvme0n3 00:17:05.656 [job3] 00:17:05.656 filename=/dev/nvme0n4 00:17:05.656 Could not set queue depth (nvme0n1) 00:17:05.656 Could not set queue depth 
(nvme0n2) 00:17:05.656 Could not set queue depth (nvme0n3) 00:17:05.656 Could not set queue depth (nvme0n4) 00:17:05.917 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:05.917 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:05.917 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:05.917 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:05.917 fio-3.35 00:17:05.917 Starting 4 threads 00:17:07.302 00:17:07.302 job0: (groupid=0, jobs=1): err= 0: pid=2080169: Fri Jul 12 10:55:24 2024 00:17:07.302 read: IOPS=5360, BW=20.9MiB/s (22.0MB/s)(21.0MiB/1005msec) 00:17:07.302 slat (nsec): min=861, max=10641k, avg=93567.18, stdev=579709.77 00:17:07.302 clat (usec): min=1460, max=44476, avg=11145.58, stdev=5199.26 00:17:07.302 lat (usec): min=4071, max=44499, avg=11239.15, stdev=5253.66 00:17:07.302 clat percentiles (usec): 00:17:07.302 | 1.00th=[ 4555], 5.00th=[ 6063], 10.00th=[ 7111], 20.00th=[ 8160], 00:17:07.302 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10683], 00:17:07.302 | 70.00th=[11600], 80.00th=[12780], 90.00th=[14746], 95.00th=[19006], 00:17:07.302 | 99.00th=[35914], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:07.302 | 99.99th=[44303] 00:17:07.302 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:17:07.302 slat (nsec): min=1453, max=5820.9k, avg=83901.32, stdev=385618.86 00:17:07.302 clat (usec): min=3933, max=36194, avg=11914.99, stdev=4252.08 00:17:07.302 lat (usec): min=3940, max=36201, avg=11998.89, stdev=4271.69 00:17:07.302 clat percentiles (usec): 00:17:07.302 | 1.00th=[ 5342], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 7963], 00:17:07.302 | 30.00th=[ 9241], 40.00th=[10814], 50.00th=[11863], 60.00th=[12518], 00:17:07.302 | 70.00th=[13435], 80.00th=[14615], 90.00th=[17171], 95.00th=[18744], 00:17:07.302 | 99.00th=[26346], 99.50th=[30802], 99.90th=[35390], 99.95th=[36439], 00:17:07.302 | 99.99th=[36439] 00:17:07.302 bw ( KiB/s): min=22136, max=22920, per=25.87%, avg=22528.00, stdev=554.37, samples=2 00:17:07.302 iops : min= 5534, max= 5730, avg=5632.00, stdev=138.59, samples=2 00:17:07.302 lat (msec) : 2=0.01%, 4=0.11%, 10=40.89%, 20=54.90%, 50=4.09% 00:17:07.302 cpu : usr=3.09%, sys=5.58%, ctx=645, majf=0, minf=2 00:17:07.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:07.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:07.302 issued rwts: total=5387,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:07.302 job1: (groupid=0, jobs=1): err= 0: pid=2080170: Fri Jul 12 10:55:24 2024 00:17:07.302 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:17:07.302 slat (nsec): min=878, max=8004.0k, avg=89392.32, stdev=527647.36 00:17:07.302 clat (usec): min=1563, max=25046, avg=11827.20, stdev=3990.80 00:17:07.302 lat (usec): min=1565, max=25053, avg=11916.59, stdev=4004.82 00:17:07.302 clat percentiles (usec): 00:17:07.302 | 1.00th=[ 3687], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 7439], 00:17:07.302 | 30.00th=[ 8586], 40.00th=[11076], 50.00th=[12387], 60.00th=[13304], 00:17:07.302 | 70.00th=[13960], 80.00th=[15270], 90.00th=[16909], 
95.00th=[18220], 00:17:07.302 | 99.00th=[20317], 99.50th=[20841], 99.90th=[22414], 99.95th=[22938], 00:17:07.302 | 99.99th=[25035] 00:17:07.302 write: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1006msec); 0 zone resets 00:17:07.302 slat (nsec): min=1453, max=10997k, avg=71950.70, stdev=389600.83 00:17:07.302 clat (usec): min=767, max=26056, avg=9968.64, stdev=4005.80 00:17:07.302 lat (usec): min=770, max=26064, avg=10040.59, stdev=4023.53 00:17:07.302 clat percentiles (usec): 00:17:07.302 | 1.00th=[ 1811], 5.00th=[ 3884], 10.00th=[ 4686], 20.00th=[ 5932], 00:17:07.302 | 30.00th=[ 7570], 40.00th=[ 9241], 50.00th=[10552], 60.00th=[11600], 00:17:07.302 | 70.00th=[11994], 80.00th=[12387], 90.00th=[13829], 95.00th=[17171], 00:17:07.302 | 99.00th=[21103], 99.50th=[22938], 99.90th=[26084], 99.95th=[26084], 00:17:07.302 | 99.99th=[26084] 00:17:07.302 bw ( KiB/s): min=23312, max=24680, per=27.56%, avg=23996.00, stdev=967.32, samples=2 00:17:07.302 iops : min= 5828, max= 6170, avg=5999.00, stdev=241.83, samples=2 00:17:07.302 lat (usec) : 1000=0.09% 00:17:07.302 lat (msec) : 2=0.56%, 4=3.04%, 10=36.38%, 20=58.40%, 50=1.51% 00:17:07.302 cpu : usr=3.78%, sys=5.77%, ctx=639, majf=0, minf=1 00:17:07.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:07.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:07.302 issued rwts: total=5632,6126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:07.302 job2: (groupid=0, jobs=1): err= 0: pid=2080171: Fri Jul 12 10:55:24 2024 00:17:07.302 read: IOPS=4395, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1005msec) 00:17:07.302 slat (nsec): min=966, max=11950k, avg=116328.38, stdev=630085.77 00:17:07.302 clat (usec): min=2274, max=46293, avg=14478.64, stdev=5888.44 00:17:07.302 lat (usec): min=6342, max=46302, avg=14594.97, stdev=5921.71 00:17:07.302 clat percentiles (usec): 00:17:07.302 | 1.00th=[ 7570], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11076], 00:17:07.302 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12780], 60.00th=[13829], 00:17:07.302 | 70.00th=[14746], 80.00th=[16450], 90.00th=[20841], 95.00th=[24773], 00:17:07.302 | 99.00th=[43254], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:17:07.302 | 99.99th=[46400] 00:17:07.302 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:17:07.302 slat (nsec): min=1577, max=11568k, avg=100312.08, stdev=495340.80 00:17:07.302 clat (usec): min=6554, max=34024, avg=13568.32, stdev=4708.07 00:17:07.302 lat (usec): min=6563, max=34033, avg=13668.64, stdev=4735.45 00:17:07.302 clat percentiles (usec): 00:17:07.302 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10290], 00:17:07.302 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12125], 60.00th=[13173], 00:17:07.302 | 70.00th=[14484], 80.00th=[16319], 90.00th=[20055], 95.00th=[23462], 00:17:07.302 | 99.00th=[31065], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:17:07.302 | 99.99th=[33817] 00:17:07.302 bw ( KiB/s): min=16384, max=20480, per=21.17%, avg=18432.00, stdev=2896.31, samples=2 00:17:07.302 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:17:07.302 lat (msec) : 4=0.01%, 10=13.15%, 20=76.03%, 50=10.80% 00:17:07.302 cpu : usr=3.09%, sys=5.28%, ctx=504, majf=0, minf=1 00:17:07.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:07.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:07.302 issued rwts: total=4417,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:07.302 job3: (groupid=0, jobs=1): err= 0: pid=2080172: Fri Jul 12 10:55:24 2024 00:17:07.302 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:17:07.302 slat (nsec): min=923, max=8805.9k, avg=97650.77, stdev=542434.89 00:17:07.302 clat (usec): min=5101, max=26210, avg=12788.83, stdev=3473.25 00:17:07.302 lat (usec): min=5110, max=26215, avg=12886.48, stdev=3478.45 00:17:07.302 clat percentiles (usec): 00:17:07.302 | 1.00th=[ 6652], 5.00th=[ 7963], 10.00th=[ 8356], 20.00th=[10028], 00:17:07.302 | 30.00th=[10945], 40.00th=[12125], 50.00th=[12780], 60.00th=[13566], 00:17:07.302 | 70.00th=[14353], 80.00th=[14877], 90.00th=[15926], 95.00th=[17957], 00:17:07.302 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:17:07.302 | 99.99th=[26084] 00:17:07.302 write: IOPS=5521, BW=21.6MiB/s (22.6MB/s)(21.6MiB/1002msec); 0 zone resets 00:17:07.302 slat (nsec): min=1552, max=5068.5k, avg=83076.26, stdev=444357.81 00:17:07.302 clat (usec): min=973, max=25142, avg=10937.38, stdev=3140.98 00:17:07.302 lat (usec): min=1431, max=25149, avg=11020.46, stdev=3129.40 00:17:07.302 clat percentiles (usec): 00:17:07.302 | 1.00th=[ 3163], 5.00th=[ 5932], 10.00th=[ 7046], 20.00th=[ 9110], 00:17:07.302 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[11207], 60.00th=[11731], 00:17:07.302 | 70.00th=[11994], 80.00th=[12256], 90.00th=[14484], 95.00th=[16319], 00:17:07.302 | 99.00th=[21627], 99.50th=[23987], 99.90th=[25035], 99.95th=[25035], 00:17:07.302 | 99.99th=[25035] 00:17:07.302 bw ( KiB/s): min=20712, max=22536, per=24.83%, avg=21624.00, stdev=1289.76, samples=2 00:17:07.302 iops : min= 5178, max= 5634, avg=5406.00, stdev=322.44, samples=2 00:17:07.302 lat (usec) : 1000=0.01% 00:17:07.302 lat (msec) : 2=0.18%, 4=0.61%, 10=28.39%, 20=68.37%, 50=2.45% 00:17:07.302 cpu : usr=3.50%, sys=5.89%, ctx=418, majf=0, minf=1 00:17:07.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:07.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:07.302 issued rwts: total=5120,5533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:07.302 00:17:07.303 Run status group 0 (all jobs): 00:17:07.303 READ: bw=79.8MiB/s (83.7MB/s), 17.2MiB/s-21.9MiB/s (18.0MB/s-22.9MB/s), io=80.3MiB (84.2MB), run=1002-1006msec 00:17:07.303 WRITE: bw=85.0MiB/s (89.2MB/s), 17.9MiB/s-23.8MiB/s (18.8MB/s-24.9MB/s), io=85.5MiB (89.7MB), run=1002-1006msec 00:17:07.303 00:17:07.303 Disk stats (read/write): 00:17:07.303 nvme0n1: ios=4271/4608, merge=0/0, ticks=22786/25909, in_queue=48695, util=86.87% 00:17:07.303 nvme0n2: ios=4669/5120, merge=0/0, ticks=21806/24117, in_queue=45923, util=86.75% 00:17:07.303 nvme0n3: ios=3610/3954, merge=0/0, ticks=17311/15261, in_queue=32572, util=100.00% 00:17:07.303 nvme0n4: ios=4167/4608, merge=0/0, ticks=20357/16897, in_queue=37254, util=96.06% 00:17:07.303 10:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:07.303 10:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2080274 00:17:07.303 10:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:07.303 10:55:24 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:07.303 [global] 00:17:07.303 thread=1 00:17:07.303 invalidate=1 00:17:07.303 rw=read 00:17:07.303 time_based=1 00:17:07.303 runtime=10 00:17:07.303 ioengine=libaio 00:17:07.303 direct=1 00:17:07.303 bs=4096 00:17:07.303 iodepth=1 00:17:07.303 norandommap=1 00:17:07.303 numjobs=1 00:17:07.303 00:17:07.303 [job0] 00:17:07.303 filename=/dev/nvme0n1 00:17:07.303 [job1] 00:17:07.303 filename=/dev/nvme0n2 00:17:07.303 [job2] 00:17:07.303 filename=/dev/nvme0n3 00:17:07.303 [job3] 00:17:07.303 filename=/dev/nvme0n4 00:17:07.303 Could not set queue depth (nvme0n1) 00:17:07.303 Could not set queue depth (nvme0n2) 00:17:07.303 Could not set queue depth (nvme0n3) 00:17:07.303 Could not set queue depth (nvme0n4) 00:17:07.563 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.563 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.563 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.563 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.563 fio-3.35 00:17:07.563 Starting 4 threads 00:17:10.174 10:55:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:10.436 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=7348224, buflen=4096 00:17:10.436 fio: pid=2080696, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:10.436 10:55:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:10.436 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=10616832, buflen=4096 00:17:10.436 fio: pid=2080690, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:10.436 10:55:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:10.436 10:55:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:10.697 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=9256960, buflen=4096 00:17:10.697 fio: pid=2080649, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:10.697 10:55:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:10.697 10:55:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:10.959 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=13398016, buflen=4096 00:17:10.959 fio: pid=2080669, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:10.959 10:55:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:10.959 10:55:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:10.959 00:17:10.959 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2080649: Fri Jul 
12 10:55:27 2024 00:17:10.959 read: IOPS=772, BW=3088KiB/s (3163kB/s)(9040KiB/2927msec) 00:17:10.959 slat (usec): min=6, max=21690, avg=50.11, stdev=661.25 00:17:10.959 clat (usec): min=170, max=42655, avg=1229.54, stdev=3518.76 00:17:10.959 lat (usec): min=177, max=42684, avg=1279.67, stdev=3576.20 00:17:10.959 clat percentiles (usec): 00:17:10.959 | 1.00th=[ 449], 5.00th=[ 586], 10.00th=[ 676], 20.00th=[ 758], 00:17:10.959 | 30.00th=[ 840], 40.00th=[ 898], 50.00th=[ 930], 60.00th=[ 963], 00:17:10.959 | 70.00th=[ 1004], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1205], 00:17:10.959 | 99.00th=[ 1401], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:17:10.959 | 99.99th=[42730] 00:17:10.959 bw ( KiB/s): min= 1760, max= 4584, per=24.96%, avg=3190.40, stdev=1290.77, samples=5 00:17:10.959 iops : min= 440, max= 1146, avg=797.60, stdev=322.69, samples=5 00:17:10.959 lat (usec) : 250=0.13%, 500=1.86%, 750=16.45%, 1000=51.39% 00:17:10.959 lat (msec) : 2=29.19%, 10=0.09%, 20=0.09%, 50=0.75% 00:17:10.959 cpu : usr=0.75%, sys=2.22%, ctx=2266, majf=0, minf=1 00:17:10.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.959 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.959 issued rwts: total=2261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.959 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2080669: Fri Jul 12 10:55:27 2024 00:17:10.959 read: IOPS=1054, BW=4215KiB/s (4316kB/s)(12.8MiB/3104msec) 00:17:10.959 slat (usec): min=6, max=16604, avg=36.95, stdev=389.54 00:17:10.959 clat (usec): min=226, max=42659, avg=898.44, stdev=2009.88 00:17:10.959 lat (usec): min=233, max=42684, avg=935.40, stdev=2048.28 00:17:10.959 clat percentiles (usec): 00:17:10.959 | 1.00th=[ 424], 5.00th=[ 506], 10.00th=[ 545], 20.00th=[ 627], 00:17:10.959 | 30.00th=[ 717], 40.00th=[ 775], 50.00th=[ 832], 60.00th=[ 881], 00:17:10.959 | 70.00th=[ 914], 80.00th=[ 947], 90.00th=[ 971], 95.00th=[ 996], 00:17:10.959 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[42206], 99.95th=[42730], 00:17:10.959 | 99.99th=[42730] 00:17:10.959 bw ( KiB/s): min= 2750, max= 5416, per=33.51%, avg=4282.33, stdev=903.43, samples=6 00:17:10.959 iops : min= 687, max= 1354, avg=1070.50, stdev=226.03, samples=6 00:17:10.959 lat (usec) : 250=0.03%, 500=4.61%, 750=30.99%, 1000=59.87% 00:17:10.959 lat (msec) : 2=4.13%, 10=0.06%, 20=0.03%, 50=0.24% 00:17:10.959 cpu : usr=1.00%, sys=3.09%, ctx=3278, majf=0, minf=1 00:17:10.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.959 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.959 issued rwts: total=3272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.959 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2080690: Fri Jul 12 10:55:27 2024 00:17:10.959 read: IOPS=938, BW=3752KiB/s (3843kB/s)(10.1MiB/2763msec) 00:17:10.959 slat (usec): min=6, max=16899, avg=36.60, stdev=395.59 00:17:10.959 clat (usec): min=413, max=41425, avg=1011.74, stdev=801.85 00:17:10.959 lat (usec): min=438, max=41450, avg=1048.34, stdev=894.00 00:17:10.959 clat percentiles (usec): 00:17:10.959 | 
1.00th=[ 627], 5.00th=[ 783], 10.00th=[ 857], 20.00th=[ 930], 00:17:10.959 | 30.00th=[ 955], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1045], 00:17:10.959 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1106], 95.00th=[ 1123], 00:17:10.959 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1270], 99.95th=[ 1287], 00:17:10.959 | 99.99th=[41681] 00:17:10.959 bw ( KiB/s): min= 3728, max= 3872, per=29.87%, avg=3817.60, stdev=56.11, samples=5 00:17:10.959 iops : min= 932, max= 968, avg=954.40, stdev=14.03, samples=5 00:17:10.959 lat (usec) : 500=0.08%, 750=3.70%, 1000=38.91% 00:17:10.959 lat (msec) : 2=57.23%, 50=0.04% 00:17:10.959 cpu : usr=0.69%, sys=3.19%, ctx=2596, majf=0, minf=1 00:17:10.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.959 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.959 issued rwts: total=2593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.959 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2080696: Fri Jul 12 10:55:27 2024 00:17:10.959 read: IOPS=695, BW=2780KiB/s (2847kB/s)(7176KiB/2581msec) 00:17:10.959 slat (nsec): min=6102, max=62600, avg=13971.77, stdev=8818.91 00:17:10.959 clat (usec): min=539, max=42126, avg=1408.01, stdev=4509.41 00:17:10.959 lat (usec): min=545, max=42152, avg=1421.97, stdev=4510.76 00:17:10.959 clat percentiles (usec): 00:17:10.959 | 1.00th=[ 611], 5.00th=[ 758], 10.00th=[ 824], 20.00th=[ 848], 00:17:10.959 | 30.00th=[ 873], 40.00th=[ 889], 50.00th=[ 906], 60.00th=[ 922], 00:17:10.959 | 70.00th=[ 947], 80.00th=[ 971], 90.00th=[ 1012], 95.00th=[ 1057], 00:17:10.959 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:10.959 | 99.99th=[42206] 00:17:10.959 bw ( KiB/s): min= 96, max= 4576, per=21.72%, avg=2776.00, stdev=2199.95, samples=5 00:17:10.959 iops : min= 24, max= 1144, avg=694.00, stdev=549.99, samples=5 00:17:10.959 lat (usec) : 750=4.74%, 1000=82.67% 00:17:10.959 lat (msec) : 2=11.31%, 50=1.23% 00:17:10.959 cpu : usr=0.39%, sys=1.12%, ctx=1799, majf=0, minf=2 00:17:10.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.959 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.959 issued rwts: total=1795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.959 00:17:10.959 Run status group 0 (all jobs): 00:17:10.959 READ: bw=12.5MiB/s (13.1MB/s), 2780KiB/s-4215KiB/s (2847kB/s-4316kB/s), io=38.7MiB (40.6MB), run=2581-3104msec 00:17:10.959 00:17:10.959 Disk stats (read/write): 00:17:10.959 nvme0n1: ios=2167/0, merge=0/0, ticks=2617/0, in_queue=2617, util=92.89% 00:17:10.959 nvme0n2: ios=3312/0, merge=0/0, ticks=3114/0, in_queue=3114, util=98.61% 00:17:10.959 nvme0n3: ios=2519/0, merge=0/0, ticks=3448/0, in_queue=3448, util=99.22% 00:17:10.959 nvme0n4: ios=1601/0, merge=0/0, ticks=3315/0, in_queue=3315, util=99.06% 00:17:10.959 10:55:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:10.959 10:55:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:11.220 10:55:28 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:11.220 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:11.481 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:11.481 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:11.481 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:11.481 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2080274 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:11.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:11.742 nvmf hotplug test: fio failed as expected 00:17:11.742 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.003 rmmod nvme_tcp 00:17:12.003 rmmod nvme_fabrics 00:17:12.003 rmmod nvme_keyring 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2076690 ']' 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2076690 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2076690 ']' 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2076690 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.003 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2076690 00:17:12.264 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:12.264 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:12.264 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2076690' 00:17:12.264 killing process with pid 2076690 00:17:12.264 10:55:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2076690 00:17:12.264 10:55:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2076690 00:17:12.264 10:55:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:12.264 10:55:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:12.264 10:55:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:12.264 10:55:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.264 10:55:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.264 10:55:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.264 10:55:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.264 10:55:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.813 10:55:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.813 00:17:14.813 real 0m28.842s 00:17:14.813 user 2m38.103s 00:17:14.813 sys 0m9.493s 00:17:14.813 10:55:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:14.813 10:55:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.813 ************************************ 00:17:14.813 END TEST nvmf_fio_target 00:17:14.813 ************************************ 00:17:14.813 10:55:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:14.813 10:55:31 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:14.813 10:55:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:14.813 10:55:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.813 10:55:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:14.813 
************************************ 00:17:14.813 START TEST nvmf_bdevio 00:17:14.813 ************************************ 00:17:14.813 10:55:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:14.813 * Looking for test storage... 00:17:14.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.813 10:55:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.813 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:14.813 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.813 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.814 10:55:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:21.406 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- 
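What follows is the harness probing PCI for NICs it can drive: it builds ID lists for Intel e810 and x722 plus several Mellanox parts, and because NET_TYPE is phy and the e810 branch is taken (the [[ e810 == e810 ]] check below), it settles on the two 0x8086:0x159b ports and the cvl_0_0/cvl_0_1 netdevs bound to them. A minimal sketch of the same lookup with stock pciutils and sysfs, assuming only the 0x159b device ID this log reports (other E810 variants would need their own IDs):

# list E810 (8086:159b) ports and the kernel netdev bound to each
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    netdev=$(ls /sys/bus/pci/devices/"$pci"/net 2>/dev/null)
    echo "Found $pci -> ${netdev:-no netdev bound}"
done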
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:21.407 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:21.407 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:21.407 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:21.407 
Found net devices under 0000:4b:00.1: cvl_0_1 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:21.407 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:21.669 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.669 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.669 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.669 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.669 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:21.669 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.669 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:21.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:17:21.931 00:17:21.931 --- 10.0.0.2 ping statistics --- 00:17:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.931 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:21.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:17:21.931 00:17:21.931 --- 10.0.0.1 ping statistics --- 00:17:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.931 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2085711 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2085711 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2085711 ']' 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.931 10:55:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:21.931 [2024-07-12 10:55:38.799725] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:21.931 [2024-07-12 10:55:38.799791] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.931 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.931 [2024-07-12 10:55:38.888343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.192 [2024-07-12 10:55:38.983173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.192 [2024-07-12 10:55:38.983230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
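The namespace bring-up that produced the ping results above reduces to a short, repeatable sequence. A minimal sketch in plain shell, with the interface names (cvl_0_0/cvl_0_1), addresses, and port taken verbatim from the log; error handling is omitted:

    # Target port moves into a private namespace; the initiator port stays on the host.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1     # drop stale addresses
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP in
    ping -c 1 10.0.0.2                                       # host -> target check
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> host check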
00:17:22.192 [2024-07-12 10:55:38.983238] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.192 [2024-07-12 10:55:38.983245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.192 [2024-07-12 10:55:38.983251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.192 [2024-07-12 10:55:38.983445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:22.192 [2024-07-12 10:55:38.983606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:22.192 [2024-07-12 10:55:38.983767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:22.192 [2024-07-12 10:55:38.983767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:22.765 [2024-07-12 10:55:39.646350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:22.765 Malloc0 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:22.765 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
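Distilled from the rpc_cmd calls replayed above, the entire target provisioning fits in five RPCs. A sketch using scripts/rpc.py directly (the test wraps the same calls in its rpc_cmd helper against the target running in the namespace); arguments mirror the log:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB IO unit
    rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420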
00:17:22.766 [2024-07-12 10:55:39.711856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:22.766 { 00:17:22.766 "params": { 00:17:22.766 "name": "Nvme$subsystem", 00:17:22.766 "trtype": "$TEST_TRANSPORT", 00:17:22.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.766 "adrfam": "ipv4", 00:17:22.766 "trsvcid": "$NVMF_PORT", 00:17:22.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.766 "hdgst": ${hdgst:-false}, 00:17:22.766 "ddgst": ${ddgst:-false} 00:17:22.766 }, 00:17:22.766 "method": "bdev_nvme_attach_controller" 00:17:22.766 } 00:17:22.766 EOF 00:17:22.766 )") 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:22.766 10:55:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:22.766 "params": { 00:17:22.766 "name": "Nvme1", 00:17:22.766 "trtype": "tcp", 00:17:22.766 "traddr": "10.0.0.2", 00:17:22.766 "adrfam": "ipv4", 00:17:22.766 "trsvcid": "4420", 00:17:22.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.766 "hdgst": false, 00:17:22.766 "ddgst": false 00:17:22.766 }, 00:17:22.766 "method": "bdev_nvme_attach_controller" 00:17:22.766 }' 00:17:23.027 [2024-07-12 10:55:39.769417] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
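The --json /dev/fd/62 argument recorded above is the footprint of bash process substitution: the generated config never touches disk. A minimal sketch of the pattern, where gen_nvmf_target_json stands in for the generator whose bdev_nvme_attach_controller block is printed in the log:

    # Hand bdevio its config as an anonymous file descriptor:
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)
    # Equivalent with a temp file, for comparison:
    gen_nvmf_target_json > /tmp/nvmf.json
    ./test/bdev/bdevio/bdevio --json /tmp/nvmf.json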
00:17:23.027 [2024-07-12 10:55:39.769479] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085757 ] 00:17:23.027 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.027 [2024-07-12 10:55:39.852763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:23.027 [2024-07-12 10:55:39.951774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.027 [2024-07-12 10:55:39.951939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.027 [2024-07-12 10:55:39.951939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.289 I/O targets: 00:17:23.289 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:23.289 00:17:23.289 00:17:23.289 CUnit - A unit testing framework for C - Version 2.1-3 00:17:23.289 http://cunit.sourceforge.net/ 00:17:23.289 00:17:23.289 00:17:23.289 Suite: bdevio tests on: Nvme1n1 00:17:23.289 Test: blockdev write read block ...passed 00:17:23.289 Test: blockdev write zeroes read block ...passed 00:17:23.289 Test: blockdev write zeroes read no split ...passed 00:17:23.289 Test: blockdev write zeroes read split ...passed 00:17:23.289 Test: blockdev write zeroes read split partial ...passed 00:17:23.289 Test: blockdev reset ...[2024-07-12 10:55:40.264626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:23.289 [2024-07-12 10:55:40.264732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729ce0 (9): Bad file descriptor 00:17:23.550 [2024-07-12 10:55:40.362326] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:23.550 passed 00:17:23.550 Test: blockdev write read 8 blocks ...passed 00:17:23.550 Test: blockdev write read size > 128k ...passed 00:17:23.550 Test: blockdev write read invalid size ...passed 00:17:23.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:23.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:23.550 Test: blockdev write read max offset ...passed 00:17:23.811 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:23.811 Test: blockdev writev readv 8 blocks ...passed 00:17:23.811 Test: blockdev writev readv 30 x 1block ...passed 00:17:23.811 Test: blockdev writev readv block ...passed 00:17:23.811 Test: blockdev writev readv size > 128k ...passed 00:17:23.811 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:23.811 Test: blockdev comparev and writev ...[2024-07-12 10:55:40.669215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:23.811 [2024-07-12 10:55:40.669249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.811 [2024-07-12 10:55:40.669264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:23.811 [2024-07-12 10:55:40.669273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:23.811 [2024-07-12 10:55:40.669801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:23.811 [2024-07-12 10:55:40.669812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:23.811 [2024-07-12 10:55:40.669826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:23.811 [2024-07-12 10:55:40.669834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:23.811 [2024-07-12 10:55:40.670413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:23.811 [2024-07-12 10:55:40.670424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:23.811 [2024-07-12 10:55:40.670438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:23.811 [2024-07-12 10:55:40.670445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:23.811 [2024-07-12 10:55:40.671008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:23.811 [2024-07-12 10:55:40.671018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:23.811 [2024-07-12 10:55:40.671031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:23.811 [2024-07-12 10:55:40.671039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:23.811 passed 00:17:23.811 Test: blockdev nvme passthru rw ...passed 00:17:23.811 Test: blockdev nvme passthru vendor specific ...[2024-07-12 10:55:40.755107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:23.811 [2024-07-12 10:55:40.755120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:23.811 [2024-07-12 10:55:40.755523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:23.811 [2024-07-12 10:55:40.755533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:23.811 [2024-07-12 10:55:40.755932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:23.811 [2024-07-12 10:55:40.755942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:23.811 [2024-07-12 10:55:40.756346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:23.811 [2024-07-12 10:55:40.756357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:23.811 passed 00:17:23.811 Test: blockdev nvme admin passthru ...passed 00:17:24.073 Test: blockdev copy ...passed 00:17:24.073 00:17:24.073 Run Summary: Type Total Ran Passed Failed Inactive 00:17:24.073 suites 1 1 n/a 0 0 00:17:24.073 tests 23 23 23 0 0 00:17:24.073 asserts 152 152 152 0 n/a 00:17:24.073 00:17:24.073 Elapsed time = 1.389 seconds 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.073 rmmod nvme_tcp 00:17:24.073 rmmod nvme_fabrics 00:17:24.073 rmmod nvme_keyring 00:17:24.073 10:55:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.073 10:55:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:24.073 10:55:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:24.073 10:55:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2085711 ']' 00:17:24.073 10:55:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2085711 00:17:24.073 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2085711 ']' 00:17:24.073 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2085711 00:17:24.073 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:24.073 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.073 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2085711 00:17:24.332 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:24.332 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:24.332 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2085711' 00:17:24.332 killing process with pid 2085711 00:17:24.332 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2085711 00:17:24.332 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2085711 00:17:24.332 10:55:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:24.332 10:55:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:24.332 10:55:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:24.332 10:55:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.332 10:55:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.333 10:55:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.333 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.333 10:55:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.876 10:55:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:26.876 00:17:26.876 real 0m11.988s 00:17:26.876 user 0m13.135s 00:17:26.876 sys 0m6.078s 00:17:26.876 10:55:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:26.876 10:55:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.876 ************************************ 00:17:26.876 END TEST nvmf_bdevio 00:17:26.876 ************************************ 00:17:26.876 10:55:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:26.876 10:55:43 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:26.876 10:55:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:26.876 10:55:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:26.876 10:55:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:26.876 ************************************ 00:17:26.876 START TEST nvmf_auth_target 00:17:26.876 ************************************ 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:26.876 * Looking for test storage... 
00:17:26.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:26.876 10:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:33.465 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.466 10:55:50 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:33.466 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:33.466 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:17:33.466 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:33.466 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.466 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.726 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.726 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.726 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:33.726 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.726 10:55:50 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.726 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:33.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:17:33.987 00:17:33.987 --- 10.0.0.2 ping statistics --- 00:17:33.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.987 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:33.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:17:33.987 00:17:33.987 --- 10.0.0.1 ping statistics --- 00:17:33.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.987 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2090160 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2090160 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2090160 ']' 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
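nvmfappstart, seen above launching nvmf_tgt inside the namespace and then waiting on the RPC socket, follows a start-then-poll pattern. A minimal sketch, under the assumption that polling rpc_get_methods is an acceptable readiness probe (the real waitforlisten helper in autotest_common.sh also retries with a bounded max_retries count):

    # Start the target in the test namespace and record its pid.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target already died
        sleep 0.5
    done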
00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.987 10:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2090429 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d46fcfaaf454f2e6c0e597b1808dff1afde9de09b74a3f73 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ajI 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d46fcfaaf454f2e6c0e597b1808dff1afde9de09b74a3f73 0 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d46fcfaaf454f2e6c0e597b1808dff1afde9de09b74a3f73 0 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d46fcfaaf454f2e6c0e597b1808dff1afde9de09b74a3f73 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ajI 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ajI 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ajI 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1f38f555dec19bee093ec1e75e6087b1d81e0d7e9f6edbdaae2174fa4b0c8ca3 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bLZ 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1f38f555dec19bee093ec1e75e6087b1d81e0d7e9f6edbdaae2174fa4b0c8ca3 3 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1f38f555dec19bee093ec1e75e6087b1d81e0d7e9f6edbdaae2174fa4b0c8ca3 3 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1f38f555dec19bee093ec1e75e6087b1d81e0d7e9f6edbdaae2174fa4b0c8ca3 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bLZ 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bLZ 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.bLZ 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=43cac65c3267f090d8e1c5b8d919d860 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.AiM 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 43cac65c3267f090d8e1c5b8d919d860 1 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 43cac65c3267f090d8e1c5b8d919d860 1 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=43cac65c3267f090d8e1c5b8d919d860
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.AiM
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.AiM
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.AiM
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=47764efee3f0a26ed706c38416a619ffb1109c410e27e0c6
00:17:34.930 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:17:34.931 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Bya
00:17:34.931 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 47764efee3f0a26ed706c38416a619ffb1109c410e27e0c6 2
00:17:34.931 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 47764efee3f0a26ed706c38416a619ffb1109c410e27e0c6 2
00:17:34.931 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:17:34.931 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:17:34.931 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=47764efee3f0a26ed706c38416a619ffb1109c410e27e0c6
00:17:34.931 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2
00:17:34.931 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Bya
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Bya
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Bya
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=58522f0766cdeef3730c47ef57445ad022eec507a6dc2575
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.V2r
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 58522f0766cdeef3730c47ef57445ad022eec507a6dc2575 2
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 58522f0766cdeef3730c47ef57445ad022eec507a6dc2575 2
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=58522f0766cdeef3730c47ef57445ad022eec507a6dc2575
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2
00:17:35.192 10:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.V2r
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.V2r
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.V2r
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=92d35e64212ed95376f315d7aa7822c7
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.u5V
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 92d35e64212ed95376f315d7aa7822c7 1
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 92d35e64212ed95376f315d7aa7822c7 1
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=92d35e64212ed95376f315d7aa7822c7
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.u5V
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.u5V
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.u5V
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=693ef841da76743aa01c1a996216fe20b9685d58af6a40afa68e354e0cbf2051
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dpp
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 693ef841da76743aa01c1a996216fe20b9685d58af6a40afa68e354e0cbf2051 3
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 693ef841da76743aa01c1a996216fe20b9685d58af6a40afa68e354e0cbf2051 3
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=693ef841da76743aa01c1a996216fe20b9685d58af6a40afa68e354e0cbf2051
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dpp
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dpp
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.dpp
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]=
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2090160
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2090160 ']'
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
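The xtrace above shows what gen_dhchap_key does for each slot in keys[]/ckeys[]: draw a random hex string from /dev/urandom with xxd, wrap it into a DHHC-1 blob via an uncaptured "python -" heredoc, and store it in a mode-0600 temp file. A minimal sketch of that helper, assuming (the heredoc body is not echoed by xtrace; the encoding is inferred from the secrets printed by the later nvme connect commands) that the blob is base64 over the ASCII hex key plus a little-endian CRC-32 trailer:

gen_dhchap_key() { # usage: gen_dhchap_key <digest> <len>, e.g. gen_dhchap_key sha256 32
    local digest=$1 len=$2 key file
    local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # $len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" << 'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")          # integrity trailer (assumed)
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"   # caller records the path in keys[i] or ckeys[i]
}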
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:35.192 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.452 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:35.452 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0
00:17:35.452 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2090429 /var/tmp/host.sock
00:17:35.452 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2090429 ']'
00:17:35.452 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock
00:17:35.452 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:35.452 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:17:35.452 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:35.452 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ajI
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ajI
00:17:35.712 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ajI
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.bLZ ]]
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bLZ
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bLZ
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bLZ
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AiM
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.AiM
00:17:35.973 10:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.AiM
00:17:36.233 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Bya ]]
00:17:36.233 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bya
00:17:36.233 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:36.233 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.233 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:36.233 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bya
00:17:36.233 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bya
00:17:36.493 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:17:36.493 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.V2r
00:17:36.493 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:36.493 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.493 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:36.493 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.V2r
00:17:36.493 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.V2r
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.u5V ]]
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u5V
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u5V
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u5V
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dpp
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dpp
00:17:36.753 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dpp
00:17:37.013 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]]
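Each key file is registered twice: rpc_cmd with no -s flag talks to the target at its default /var/tmp/spdk.sock (the rpc_addr seen in waitforlisten above), while hostrpc is a thin wrapper that points scripts/rpc.py at the host daemon's /var/tmp/host.sock. One iteration of the auth.sh@81 loop, written out by hand with the same paths:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    keyring_file_add_key key1 /tmp/spdk.key-sha256.AiM    # target side
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    keyring_file_add_key key1 /tmp/spdk.key-sha256.AiM    # host (initiator) side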
00:17:37.013 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:17:37.013 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:37.013 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:37.013 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:37.013 10:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:37.274 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:37.535
00:17:37.535 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:37.535 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:37.535 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:37.535 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:37.535 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:37.535 10:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:37.535 10:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:37.796 10:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:37.796 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:37.796 {
00:17:37.796 "cntlid": 1,
00:17:37.796 "qid": 0,
00:17:37.796 "state": "enabled",
00:17:37.796 "thread": "nvmf_tgt_poll_group_000",
00:17:37.796 "listen_address": {
00:17:37.796 "trtype": "TCP",
00:17:37.796 "adrfam": "IPv4",
00:17:37.796 "traddr": "10.0.0.2",
00:17:37.796 "trsvcid": "4420"
00:17:37.796 },
00:17:37.796 "peer_address": {
00:17:37.796 "trtype": "TCP",
00:17:37.796 "adrfam": "IPv4",
00:17:37.796 "traddr": "10.0.0.1",
00:17:37.796 "trsvcid": "39710"
00:17:37.796 },
00:17:37.796 "auth": {
00:17:37.796 "state": "completed",
00:17:37.796 "digest": "sha256",
00:17:37.796 "dhgroup": "null"
00:17:37.796 }
00:17:37.796 }
00:17:37.796 ]'
00:17:37.796 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:37.796 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:37.796 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:37.796 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:37.796 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:37.796 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:37.796 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:37.796 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:38.056 10:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=:
00:17:38.628 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:38.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:38.628 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:38.628 10:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:38.628 10:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.628 10:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
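That first sha256/null/key0 pass is the template for everything that follows: attach a controller through the host RPC, then interrogate the target and assert the negotiated auth parameters before tearing the pairing down. The check, extracted from the jq calls above into a stand-alone sketch (same rpc.py and subsystem NQN assumed):

qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished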
00:17:38.628 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:38.628 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:38.628 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:38.888 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:39.147
00:17:39.147 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:39.147 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:39.147 10:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:39.147 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:39.147 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:39.147 10:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:39.147 10:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.147 10:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:39.147 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:39.147 {
00:17:39.147 "cntlid": 3,
00:17:39.147 "qid": 0,
00:17:39.147 "state": "enabled",
00:17:39.147 "thread": "nvmf_tgt_poll_group_000",
00:17:39.147 "listen_address": {
00:17:39.147 "trtype": "TCP",
00:17:39.147 "adrfam": "IPv4",
00:17:39.147 "traddr": "10.0.0.2",
00:17:39.147 "trsvcid": "4420"
00:17:39.147 },
00:17:39.147 "peer_address": {
00:17:39.147 "trtype": "TCP",
00:17:39.147 "adrfam": "IPv4",
00:17:39.147 "traddr": "10.0.0.1",
00:17:39.147 "trsvcid": "39728"
00:17:39.147 },
00:17:39.147 "auth": {
00:17:39.147 "state": "completed",
00:17:39.147 "digest": "sha256",
00:17:39.147 "dhgroup": "null"
00:17:39.147 }
00:17:39.147 }
00:17:39.147 ]'
00:17:39.147 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:39.147 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:39.147 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:39.407 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:39.407 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:39.407 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:39.407 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:39.407 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:39.407 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==:
00:17:39.978 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:39.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:39.978 10:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:39.978 10:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:39.978 10:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.978 10:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:40.238 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:40.238 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:40.499
00:17:40.499 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:40.499 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:40.499 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:40.759 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:40.759 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:40.759 10:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:40.759 10:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:40.759 10:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:40.759 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:40.759 {
00:17:40.759 "cntlid": 5,
00:17:40.759 "qid": 0,
00:17:40.759 "state": "enabled",
00:17:40.759 "thread": "nvmf_tgt_poll_group_000",
00:17:40.759 "listen_address": {
00:17:40.759 "trtype": "TCP",
00:17:40.760 "adrfam": "IPv4",
00:17:40.760 "traddr": "10.0.0.2",
00:17:40.760 "trsvcid": "4420"
00:17:40.760 },
00:17:40.760 "peer_address": {
00:17:40.760 "trtype": "TCP",
00:17:40.760 "adrfam": "IPv4",
00:17:40.760 "traddr": "10.0.0.1",
00:17:40.760 "trsvcid": "39748"
00:17:40.760 },
00:17:40.760 "auth": {
00:17:40.760 "state": "completed",
00:17:40.760 "digest": "sha256",
00:17:40.760 "dhgroup": "null"
00:17:40.760 }
00:17:40.760 }
00:17:40.760 ]'
00:17:40.760 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:40.760 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:40.760 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:40.760 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:40.760 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:40.760 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:40.760 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:40.760 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:41.020 10:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb:
00:17:41.592 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:41.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:41.592 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:41.592 10:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:41.592 10:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.592 10:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:41.592 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:41.592 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:41.592 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:41.853 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:42.114
00:17:42.114 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:42.114 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:42.114 10:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:42.114 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:42.114 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:42.114 10:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:42.114 10:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.114 10:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:42.114 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:42.114 {
00:17:42.114 "cntlid": 7,
00:17:42.114 "qid": 0,
00:17:42.114 "state": "enabled",
00:17:42.114 "thread": "nvmf_tgt_poll_group_000",
00:17:42.114 "listen_address": {
00:17:42.114 "trtype": "TCP",
00:17:42.114 "adrfam": "IPv4",
00:17:42.114 "traddr": "10.0.0.2",
00:17:42.114 "trsvcid": "4420"
00:17:42.114 },
00:17:42.114 "peer_address": {
00:17:42.114 "trtype": "TCP",
00:17:42.114 "adrfam": "IPv4",
00:17:42.114 "traddr": "10.0.0.1",
00:17:42.114 "trsvcid": "38590"
00:17:42.114 },
00:17:42.114 "auth": {
00:17:42.114 "state": "completed",
00:17:42.114 "digest": "sha256",
00:17:42.114 "dhgroup": "null"
00:17:42.114 }
00:17:42.114 }
00:17:42.114 ]'
00:17:42.114 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:42.375 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:42.375 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:42.375 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:42.375 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:42.375 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:42.375 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:42.375 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:42.375 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=:
00:17:43.381 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:43.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:43.381 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:43.381 10:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:43.381 10:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.381 10:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
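key3 was generated without a companion ckey (ckeys[3]= above, so [[ -n '' ]] fails), which is why its nvmf_subsystem_add_host and attach_controller calls carry no --dhchap-ctrlr-key. The auth.sh@37 expansion is what makes that work; a tiny self-contained demo of the idiom (dummy paths, echo standing in for rpc_cmd):

declare -a ckeys=("/tmp/ck0" "/tmp/ck1" "/tmp/ck2" "")
for keyid in 0 3; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # empty value -> no args
    echo "nvmf_subsystem_add_host ... --dhchap-key key$keyid ${ckey[*]}"
done
# -> nvmf_subsystem_add_host ... --dhchap-key key0 --dhchap-ctrlr-key ckey0
# -> nvmf_subsystem_add_host ... --dhchap-key key3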
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:43.381 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:43.649
00:17:43.649 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:43.649 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:43.649 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:43.649 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:43.649 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:43.649 10:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:43.649 10:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.649 10:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:43.649 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:43.649 {
00:17:43.649 "cntlid": 9,
00:17:43.649 "qid": 0,
00:17:43.649 "state": "enabled",
00:17:43.649 "thread": "nvmf_tgt_poll_group_000",
00:17:43.649 "listen_address": {
00:17:43.649 "trtype": "TCP",
00:17:43.649 "adrfam": "IPv4",
00:17:43.649 "traddr": "10.0.0.2",
00:17:43.649 "trsvcid": "4420"
00:17:43.649 },
00:17:43.649 "peer_address": {
00:17:43.649 "trtype": "TCP",
00:17:43.649 "adrfam": "IPv4",
00:17:43.649 "traddr": "10.0.0.1",
00:17:43.649 "trsvcid": "38628"
00:17:43.649 },
00:17:43.649 "auth": {
00:17:43.649 "state": "completed",
00:17:43.649 "digest": "sha256",
00:17:43.649 "dhgroup": "ffdhe2048"
00:17:43.649 }
00:17:43.649 }
00:17:43.649 ]'
00:17:43.910 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:43.910 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:43.910 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:43.910 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:43.910 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:43.910 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:43.910 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:43.910 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:44.171 10:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=:
00:17:44.741 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:44.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:44.741 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:44.741 10:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:44.741 10:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.741 10:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:44.741 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
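Before each connect_authenticate call, the host-side bdev driver is reconfigured so that only one digest/dhgroup combination is offered during DH-HMAC-CHAP negotiation; that is all bdev_nvme_set_options is doing here. Stand-alone form of the call the hostrpc wrapper issues:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048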
10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.001 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.001
00:17:45.261 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:45.261 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:45.261 10:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:45.261 10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:45.261 10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:45.261 10:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:45.261 10:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.261 10:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:45.261 10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:45.261 {
00:17:45.261 "cntlid": 11,
00:17:45.261 "qid": 0,
00:17:45.261 "state": "enabled",
00:17:45.261 "thread": "nvmf_tgt_poll_group_000",
00:17:45.261 "listen_address": {
00:17:45.261 "trtype": "TCP",
00:17:45.261 "adrfam": "IPv4",
00:17:45.261 "traddr": "10.0.0.2",
00:17:45.261 "trsvcid": "4420"
00:17:45.261 },
00:17:45.261 "peer_address": {
00:17:45.261 "trtype": "TCP",
00:17:45.261 "adrfam": "IPv4",
00:17:45.261 "traddr": "10.0.0.1",
00:17:45.261 "trsvcid": "38652"
00:17:45.261 },
00:17:45.261 "auth": {
00:17:45.261 "state": "completed",
00:17:45.261 "digest": "sha256",
00:17:45.261 "dhgroup": "ffdhe2048"
00:17:45.261 }
00:17:45.261 }
00:17:45.261 ]'
10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:45.522 10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:45.522 10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:45.522 10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:45.522 10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:45.522 10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:45.522 10:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==:
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:46.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:46.464 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:46.465 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:46.725
00:17:46.725 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:46.725 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:46.725 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:46.725 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:46.725 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:46.725 10:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:46.985 {
00:17:46.985 "cntlid": 13,
00:17:46.985 "qid": 0,
00:17:46.985 "state": "enabled",
00:17:46.985 "thread": "nvmf_tgt_poll_group_000",
00:17:46.985 "listen_address": {
00:17:46.985 "trtype": "TCP",
00:17:46.985 "adrfam": "IPv4",
00:17:46.985 "traddr": "10.0.0.2",
00:17:46.985 "trsvcid": "4420"
00:17:46.985 },
00:17:46.985 "peer_address": {
00:17:46.985 "trtype": "TCP",
00:17:46.985 "adrfam": "IPv4",
00:17:46.985 "traddr": "10.0.0.1",
00:17:46.985 "trsvcid": "38676"
00:17:46.985 },
00:17:46.985 "auth": {
00:17:46.985 "state": "completed",
00:17:46.985 "digest": "sha256",
00:17:46.985 "dhgroup": "ffdhe2048"
00:17:46.985 }
00:17:46.985 }
00:17:46.985 ]'
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:46.985 10:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:47.246 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb:
00:17:47.817 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:47.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:47.817 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:47.817 10:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:47.817 10:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.817 10:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:47.817 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:47.817 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:47.817 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:48.078 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3
00:17:48.078 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:48.078 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:48.078 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:48.079 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:48.079 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:48.079 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:48.079 10:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:48.079 10:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.079 10:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:48.079 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:48.079 10:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:48.079
00:17:48.079 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:48.079 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:48.079 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:48.341 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:48.341 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:48.341 10:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:48.341 10:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.341 10:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:48.341 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:48.341 {
00:17:48.341 "cntlid": 15,
00:17:48.341 "qid": 0,
00:17:48.341 "state": "enabled",
00:17:48.341 "thread": "nvmf_tgt_poll_group_000",
00:17:48.341 "listen_address": {
00:17:48.341 "trtype": "TCP",
00:17:48.341 "adrfam": "IPv4",
00:17:48.341 "traddr": "10.0.0.2",
00:17:48.341 "trsvcid": "4420"
00:17:48.341 },
00:17:48.341 "peer_address": {
00:17:48.341 "trtype": "TCP",
00:17:48.341 "adrfam": "IPv4",
00:17:48.341 "traddr": "10.0.0.1",
00:17:48.341 "trsvcid": "38708"
00:17:48.341 },
00:17:48.341 "auth": {
00:17:48.341 "state": "completed",
00:17:48.341 "digest": "sha256",
00:17:48.342 "dhgroup": "ffdhe2048"
00:17:48.342 }
00:17:48.342 }
00:17:48.342 ]'
00:17:48.342 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:48.342 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:48.342 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:48.342 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:48.342 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:48.603 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:48.603 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:48.603 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:48.603 10:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=:
00:17:49.173 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:49.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:49.173 10:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:49.173 10:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:49.173 10:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.173 10:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:49.173 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
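With the ffdhe2048 pass finished, the sweep moves on to the next DH group. The loop markers at auth.sh@91-@96 imply a nest of three loops; reconstructed as a sketch (the digests/dhgroups arrays and the connect_authenticate helper live in auth.sh, which is not itself part of this log):

for digest in "${digests[@]}"; do            # auth.sh@91
    for dhgroup in "${dhgroups[@]}"; do      # auth.sh@92: null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do       # auth.sh@93: 0..3
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@94/@96
        done
    done
done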
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.173 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:49.173 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.433 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.694 00:17:49.694 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.694 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.694 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.694 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.694 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.694 10:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.694 10:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.954 10:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.954 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.954 { 00:17:49.954 "cntlid": 17, 00:17:49.954 "qid": 0, 00:17:49.954 "state": "enabled", 00:17:49.954 "thread": "nvmf_tgt_poll_group_000", 00:17:49.954 "listen_address": { 00:17:49.954 "trtype": "TCP", 00:17:49.954 "adrfam": "IPv4", 00:17:49.954 "traddr": 
"10.0.0.2", 00:17:49.954 "trsvcid": "4420" 00:17:49.954 }, 00:17:49.954 "peer_address": { 00:17:49.954 "trtype": "TCP", 00:17:49.954 "adrfam": "IPv4", 00:17:49.954 "traddr": "10.0.0.1", 00:17:49.954 "trsvcid": "38738" 00:17:49.954 }, 00:17:49.954 "auth": { 00:17:49.954 "state": "completed", 00:17:49.954 "digest": "sha256", 00:17:49.954 "dhgroup": "ffdhe3072" 00:17:49.954 } 00:17:49.954 } 00:17:49.954 ]' 00:17:49.954 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.954 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.954 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.954 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.954 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.954 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.954 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.954 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.214 10:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.785 10:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.045 00:17:51.045 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.045 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.045 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.305 { 00:17:51.305 "cntlid": 19, 00:17:51.305 "qid": 0, 00:17:51.305 "state": "enabled", 00:17:51.305 "thread": "nvmf_tgt_poll_group_000", 00:17:51.305 "listen_address": { 00:17:51.305 "trtype": "TCP", 00:17:51.305 "adrfam": "IPv4", 00:17:51.305 "traddr": "10.0.0.2", 00:17:51.305 "trsvcid": "4420" 00:17:51.305 }, 00:17:51.305 "peer_address": { 00:17:51.305 "trtype": "TCP", 00:17:51.305 "adrfam": "IPv4", 00:17:51.305 "traddr": "10.0.0.1", 00:17:51.305 "trsvcid": "37274" 00:17:51.305 }, 00:17:51.305 "auth": { 00:17:51.305 "state": "completed", 00:17:51.305 "digest": "sha256", 00:17:51.305 "dhgroup": "ffdhe3072" 00:17:51.305 } 00:17:51.305 } 00:17:51.305 ]' 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.305 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.565 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.565 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.565 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.565 10:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:17:52.137 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.398 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.398 10:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.399 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.659 00:17:52.659 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.659 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.659 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.920 { 00:17:52.920 "cntlid": 21, 00:17:52.920 "qid": 0, 00:17:52.920 "state": "enabled", 00:17:52.920 "thread": "nvmf_tgt_poll_group_000", 00:17:52.920 "listen_address": { 00:17:52.920 "trtype": "TCP", 00:17:52.920 "adrfam": "IPv4", 00:17:52.920 "traddr": "10.0.0.2", 00:17:52.920 "trsvcid": "4420" 00:17:52.920 }, 00:17:52.920 "peer_address": { 00:17:52.920 "trtype": "TCP", 00:17:52.920 "adrfam": "IPv4", 00:17:52.920 "traddr": "10.0.0.1", 00:17:52.920 "trsvcid": "37286" 00:17:52.920 }, 00:17:52.920 "auth": { 00:17:52.920 "state": "completed", 00:17:52.920 "digest": "sha256", 00:17:52.920 "dhgroup": "ffdhe3072" 00:17:52.920 } 00:17:52.920 } 00:17:52.920 ]' 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.920 10:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.181 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:17:53.752 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
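Every pass in this log repeats the same connect_authenticate shape, varying only the digest, dhgroup, and key index. A minimal standalone sketch of one pass follows; the paths, addresses, and NQNs are taken from this run, key2/ckey2 are the DH-HMAC-CHAP key names that auth.sh registers earlier in the script (outside this excerpt), DHHC1_SECRET/DHHC1_CTRL_SECRET stand in for the long DHHC-1:xx:... strings shown above, and the target-side rpc.py is assumed to use its default socket while the host-side SPDK app listens on /var/tmp/host.sock:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc="$rpc -s /var/tmp/host.sock"   # initiator-side SPDK app
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# 1) Pin the initiator to a single digest/dhgroup combination for this pass.
$hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# 2) Allow the host NQN on the subsystem with the matching key pair.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3) Attach a controller; creating the admin queue forces a DH-HMAC-CHAP handshake.
$hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4) Confirm on the target which parameters the handshake actually used.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .state, .digest, .dhgroup'

# 5) Tear down, re-test through the kernel initiator with the raw secrets, then clean up.
$hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
  --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
  --dhchap-secret "$DHHC1_SECRET" --dhchap-ctrl-secret "$DHHC1_CTRL_SECRET"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Detaching the controller and removing the host between passes is what keeps the check meaningful: each attach must complete a fresh authentication transaction, so an .auth.state of "completed" reflects the digest/dhgroup/key combination under test rather than a reused session.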
00:17:53.752 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.752 10:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.752 10:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.752 10:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.752 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.752 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.752 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.013 10:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.274 00:17:54.274 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.274 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.274 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.534 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.534 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.534 10:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.534 10:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:54.534 10:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.534 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.534 { 00:17:54.534 "cntlid": 23, 00:17:54.534 "qid": 0, 00:17:54.534 "state": "enabled", 00:17:54.534 "thread": "nvmf_tgt_poll_group_000", 00:17:54.535 "listen_address": { 00:17:54.535 "trtype": "TCP", 00:17:54.535 "adrfam": "IPv4", 00:17:54.535 "traddr": "10.0.0.2", 00:17:54.535 "trsvcid": "4420" 00:17:54.535 }, 00:17:54.535 "peer_address": { 00:17:54.535 "trtype": "TCP", 00:17:54.535 "adrfam": "IPv4", 00:17:54.535 "traddr": "10.0.0.1", 00:17:54.535 "trsvcid": "37320" 00:17:54.535 }, 00:17:54.535 "auth": { 00:17:54.535 "state": "completed", 00:17:54.535 "digest": "sha256", 00:17:54.535 "dhgroup": "ffdhe3072" 00:17:54.535 } 00:17:54.535 } 00:17:54.535 ]' 00:17:54.535 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.535 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.535 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.535 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.535 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.535 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.535 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.535 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.795 10:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:17:55.368 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.368 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.368 10:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.368 10:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.368 10:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.368 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.368 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.368 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.368 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.629 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.890 00:17:55.890 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.890 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.890 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.890 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.890 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.890 10:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.890 10:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.890 10:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.890 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.890 { 00:17:55.890 "cntlid": 25, 00:17:55.890 "qid": 0, 00:17:55.890 "state": "enabled", 00:17:55.890 "thread": "nvmf_tgt_poll_group_000", 00:17:55.890 "listen_address": { 00:17:55.890 "trtype": "TCP", 00:17:55.890 "adrfam": "IPv4", 00:17:55.890 "traddr": "10.0.0.2", 00:17:55.890 "trsvcid": "4420" 00:17:55.890 }, 00:17:55.890 "peer_address": { 00:17:55.890 "trtype": "TCP", 00:17:55.890 "adrfam": "IPv4", 00:17:55.890 "traddr": "10.0.0.1", 00:17:55.890 "trsvcid": "37344" 00:17:55.890 }, 00:17:55.890 "auth": { 00:17:55.890 "state": "completed", 00:17:55.890 "digest": "sha256", 00:17:55.890 "dhgroup": "ffdhe4096" 00:17:55.890 } 00:17:55.890 } 00:17:55.890 ]' 00:17:55.890 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.151 10:56:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.151 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.151 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.151 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.151 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.151 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.151 10:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.412 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:17:56.982 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.982 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.982 10:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.982 10:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.982 10:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.982 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.982 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:56.982 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:57.243 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:57.243 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.243 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.243 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.243 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.243 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.243 10:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.243 10:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.243 10:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.243 10:56:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.243 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.243 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.503 00:17:57.503 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.503 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.503 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.503 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.503 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.503 10:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.503 10:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.503 10:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.503 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.503 { 00:17:57.503 "cntlid": 27, 00:17:57.503 "qid": 0, 00:17:57.503 "state": "enabled", 00:17:57.503 "thread": "nvmf_tgt_poll_group_000", 00:17:57.503 "listen_address": { 00:17:57.503 "trtype": "TCP", 00:17:57.503 "adrfam": "IPv4", 00:17:57.504 "traddr": "10.0.0.2", 00:17:57.504 "trsvcid": "4420" 00:17:57.504 }, 00:17:57.504 "peer_address": { 00:17:57.504 "trtype": "TCP", 00:17:57.504 "adrfam": "IPv4", 00:17:57.504 "traddr": "10.0.0.1", 00:17:57.504 "trsvcid": "37364" 00:17:57.504 }, 00:17:57.504 "auth": { 00:17:57.504 "state": "completed", 00:17:57.504 "digest": "sha256", 00:17:57.504 "dhgroup": "ffdhe4096" 00:17:57.504 } 00:17:57.504 } 00:17:57.504 ]' 00:17:57.504 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.504 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.504 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.764 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.764 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.764 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.764 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.764 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.025 10:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.596 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.597 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.597 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.597 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.597 10:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.597 10:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.857 10:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.857 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.857 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.857 00:17:58.857 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.857 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.857 10:56:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.118 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.118 10:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.118 10:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.118 10:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.118 10:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.118 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.118 { 00:17:59.118 "cntlid": 29, 00:17:59.118 "qid": 0, 00:17:59.118 "state": "enabled", 00:17:59.118 "thread": "nvmf_tgt_poll_group_000", 00:17:59.118 "listen_address": { 00:17:59.118 "trtype": "TCP", 00:17:59.118 "adrfam": "IPv4", 00:17:59.118 "traddr": "10.0.0.2", 00:17:59.118 "trsvcid": "4420" 00:17:59.118 }, 00:17:59.118 "peer_address": { 00:17:59.118 "trtype": "TCP", 00:17:59.118 "adrfam": "IPv4", 00:17:59.118 "traddr": "10.0.0.1", 00:17:59.118 "trsvcid": "37394" 00:17:59.118 }, 00:17:59.118 "auth": { 00:17:59.118 "state": "completed", 00:17:59.118 "digest": "sha256", 00:17:59.118 "dhgroup": "ffdhe4096" 00:17:59.118 } 00:17:59.118 } 00:17:59.118 ]' 00:17:59.118 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.118 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.118 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.118 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.118 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.379 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.379 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.379 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.379 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:17:59.950 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.212 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.212 10:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.212 10:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.212 10:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.212 10:56:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.212 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.212 10:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.212 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.474 00:18:00.474 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.474 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.474 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.736 { 00:18:00.736 "cntlid": 31, 00:18:00.736 "qid": 0, 00:18:00.736 "state": "enabled", 00:18:00.736 "thread": "nvmf_tgt_poll_group_000", 00:18:00.736 "listen_address": { 00:18:00.736 "trtype": "TCP", 00:18:00.736 "adrfam": "IPv4", 00:18:00.736 "traddr": "10.0.0.2", 00:18:00.736 "trsvcid": "4420" 00:18:00.736 }, 
00:18:00.736 "peer_address": { 00:18:00.736 "trtype": "TCP", 00:18:00.736 "adrfam": "IPv4", 00:18:00.736 "traddr": "10.0.0.1", 00:18:00.736 "trsvcid": "37422" 00:18:00.736 }, 00:18:00.736 "auth": { 00:18:00.736 "state": "completed", 00:18:00.736 "digest": "sha256", 00:18:00.736 "dhgroup": "ffdhe4096" 00:18:00.736 } 00:18:00.736 } 00:18:00.736 ]' 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.736 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.997 10:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:18:01.568 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.568 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.568 10:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.568 10:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.568 10:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.568 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.568 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.568 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:01.568 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.829 10:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.090 00:18:02.090 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.090 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.090 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.350 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.350 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.350 10:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.350 10:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.350 10:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.350 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.350 { 00:18:02.350 "cntlid": 33, 00:18:02.351 "qid": 0, 00:18:02.351 "state": "enabled", 00:18:02.351 "thread": "nvmf_tgt_poll_group_000", 00:18:02.351 "listen_address": { 00:18:02.351 "trtype": "TCP", 00:18:02.351 "adrfam": "IPv4", 00:18:02.351 "traddr": "10.0.0.2", 00:18:02.351 "trsvcid": "4420" 00:18:02.351 }, 00:18:02.351 "peer_address": { 00:18:02.351 "trtype": "TCP", 00:18:02.351 "adrfam": "IPv4", 00:18:02.351 "traddr": "10.0.0.1", 00:18:02.351 "trsvcid": "60446" 00:18:02.351 }, 00:18:02.351 "auth": { 00:18:02.351 "state": "completed", 00:18:02.351 "digest": "sha256", 00:18:02.351 "dhgroup": "ffdhe6144" 00:18:02.351 } 00:18:02.351 } 00:18:02.351 ]' 00:18:02.351 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.351 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.351 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.351 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.351 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.611 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.611 10:56:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.611 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.611 10:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:18:03.553 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.553 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.553 10:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.553 10:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.553 10:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.553 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.553 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.554 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.814 00:18:03.814 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.814 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.814 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.073 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.073 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.073 10:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.073 10:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.073 10:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.073 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.073 { 00:18:04.073 "cntlid": 35, 00:18:04.073 "qid": 0, 00:18:04.073 "state": "enabled", 00:18:04.073 "thread": "nvmf_tgt_poll_group_000", 00:18:04.073 "listen_address": { 00:18:04.073 "trtype": "TCP", 00:18:04.073 "adrfam": "IPv4", 00:18:04.073 "traddr": "10.0.0.2", 00:18:04.073 "trsvcid": "4420" 00:18:04.073 }, 00:18:04.073 "peer_address": { 00:18:04.073 "trtype": "TCP", 00:18:04.073 "adrfam": "IPv4", 00:18:04.073 "traddr": "10.0.0.1", 00:18:04.073 "trsvcid": "60490" 00:18:04.073 }, 00:18:04.073 "auth": { 00:18:04.073 "state": "completed", 00:18:04.073 "digest": "sha256", 00:18:04.073 "dhgroup": "ffdhe6144" 00:18:04.073 } 00:18:04.073 } 00:18:04.073 ]' 00:18:04.073 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.073 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.073 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.074 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.074 10:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.074 10:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.074 10:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.074 10:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.333 10:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:18:04.903 10:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.903 10:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.903 10:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.903 10:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.903 10:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.903 10:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.903 10:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:04.903 10:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.163 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.423 00:18:05.423 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.423 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.423 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.683 { 00:18:05.683 "cntlid": 37, 00:18:05.683 "qid": 0, 00:18:05.683 "state": "enabled", 00:18:05.683 "thread": "nvmf_tgt_poll_group_000", 00:18:05.683 "listen_address": { 00:18:05.683 "trtype": "TCP", 00:18:05.683 "adrfam": "IPv4", 00:18:05.683 "traddr": "10.0.0.2", 00:18:05.683 "trsvcid": "4420" 00:18:05.683 }, 00:18:05.683 "peer_address": { 00:18:05.683 "trtype": "TCP", 00:18:05.683 "adrfam": "IPv4", 00:18:05.683 "traddr": "10.0.0.1", 00:18:05.683 "trsvcid": "60506" 00:18:05.683 }, 00:18:05.683 "auth": { 00:18:05.683 "state": "completed", 00:18:05.683 "digest": "sha256", 00:18:05.683 "dhgroup": "ffdhe6144" 00:18:05.683 } 00:18:05.683 } 00:18:05.683 ]' 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.683 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.943 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.943 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.943 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.943 10:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:18:06.518 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.779 10:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.039 00:18:07.317 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.317 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.317 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.317 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.317 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.317 10:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.317 10:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.317 10:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.317 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.317 { 00:18:07.317 "cntlid": 39, 00:18:07.317 "qid": 0, 00:18:07.317 "state": "enabled", 00:18:07.317 "thread": "nvmf_tgt_poll_group_000", 00:18:07.317 "listen_address": { 00:18:07.317 "trtype": "TCP", 00:18:07.317 "adrfam": "IPv4", 00:18:07.317 "traddr": "10.0.0.2", 00:18:07.317 "trsvcid": "4420" 00:18:07.317 }, 00:18:07.317 "peer_address": { 00:18:07.317 "trtype": "TCP", 00:18:07.317 "adrfam": "IPv4", 00:18:07.317 "traddr": "10.0.0.1", 00:18:07.317 "trsvcid": "60526" 00:18:07.317 }, 00:18:07.317 "auth": { 00:18:07.317 "state": "completed", 00:18:07.317 "digest": "sha256", 00:18:07.317 "dhgroup": "ffdhe6144" 00:18:07.317 } 00:18:07.317 } 00:18:07.317 ]' 00:18:07.317 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.318 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.318 10:56:24 
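The jq assertions on either side of this point are the test's per-connection verification: after an authenticated attach, it fetches the subsystem's queue pairs and checks the negotiated auth parameters. A minimal sketch of that check, using the NQN and the values of this iteration (rpc_cmd stands in for scripts/rpc.py pointed at the target's RPC socket, as elsewhere in this trace):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]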
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.318 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.318 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.631 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.631 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.631 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.631 10:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:18:08.203 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.203 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.203 10:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.203 10:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.203 10:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.203 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.203 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.204 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:08.204 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.464 10:56:25 
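The next entry is the host-side half of the handshake: a fresh bdev controller is attached over TCP with both a host key and a controller key, which makes the authentication bidirectional. Reduced to its essentials (key0/ckey0 are names of keys the test registered before this excerpt begins):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0   # ckey makes the host challenge the controller too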
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.464 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.036 00:18:09.036 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.036 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.036 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.036 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.036 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.036 10:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.036 10:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.036 10:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.036 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.036 { 00:18:09.036 "cntlid": 41, 00:18:09.036 "qid": 0, 00:18:09.036 "state": "enabled", 00:18:09.036 "thread": "nvmf_tgt_poll_group_000", 00:18:09.036 "listen_address": { 00:18:09.036 "trtype": "TCP", 00:18:09.036 "adrfam": "IPv4", 00:18:09.036 "traddr": "10.0.0.2", 00:18:09.036 "trsvcid": "4420" 00:18:09.036 }, 00:18:09.036 "peer_address": { 00:18:09.036 "trtype": "TCP", 00:18:09.036 "adrfam": "IPv4", 00:18:09.036 "traddr": "10.0.0.1", 00:18:09.036 "trsvcid": "60554" 00:18:09.036 }, 00:18:09.036 "auth": { 00:18:09.036 "state": "completed", 00:18:09.036 "digest": "sha256", 00:18:09.036 "dhgroup": "ffdhe8192" 00:18:09.036 } 00:18:09.036 } 00:18:09.036 ]' 00:18:09.036 10:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.036 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.036 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.297 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.297 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.297 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.297 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.297 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.297 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:18:10.239 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.239 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.239 10:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.239 10:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.239 10:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.239 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.239 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:10.239 10:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.239 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.811 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.811 { 00:18:10.811 "cntlid": 43, 00:18:10.811 "qid": 0, 00:18:10.811 "state": "enabled", 00:18:10.811 "thread": "nvmf_tgt_poll_group_000", 00:18:10.811 "listen_address": { 00:18:10.811 "trtype": "TCP", 00:18:10.811 "adrfam": "IPv4", 00:18:10.811 "traddr": "10.0.0.2", 00:18:10.811 "trsvcid": "4420" 00:18:10.811 }, 00:18:10.811 "peer_address": { 00:18:10.811 "trtype": "TCP", 00:18:10.811 "adrfam": "IPv4", 00:18:10.811 "traddr": "10.0.0.1", 00:18:10.811 "trsvcid": "60580" 00:18:10.811 }, 00:18:10.811 "auth": { 00:18:10.811 "state": "completed", 00:18:10.811 "digest": "sha256", 00:18:10.811 "dhgroup": "ffdhe8192" 00:18:10.811 } 00:18:10.811 } 00:18:10.811 ]' 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.811 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.072 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.072 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.072 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.072 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.072 10:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.333 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:18:11.902 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.902 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.902 10:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.902 10:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.902 10:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.902 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:11.902 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:11.902 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.163 10:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.736 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.736 { 00:18:12.736 "cntlid": 45, 00:18:12.736 "qid": 0, 00:18:12.736 "state": "enabled", 00:18:12.736 "thread": "nvmf_tgt_poll_group_000", 00:18:12.736 "listen_address": { 00:18:12.736 "trtype": "TCP", 00:18:12.736 "adrfam": "IPv4", 00:18:12.736 "traddr": "10.0.0.2", 00:18:12.736 "trsvcid": "4420" 
00:18:12.736 }, 00:18:12.736 "peer_address": { 00:18:12.736 "trtype": "TCP", 00:18:12.736 "adrfam": "IPv4", 00:18:12.736 "traddr": "10.0.0.1", 00:18:12.736 "trsvcid": "56638" 00:18:12.736 }, 00:18:12.736 "auth": { 00:18:12.736 "state": "completed", 00:18:12.736 "digest": "sha256", 00:18:12.736 "dhgroup": "ffdhe8192" 00:18:12.736 } 00:18:12.736 } 00:18:12.736 ]' 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.736 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.997 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.997 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.997 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.997 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.997 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.997 10:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.940 10:56:30 
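The add_host call that follows registers the host on the target side and binds it to a DH-HMAC-CHAP key. Note that for key3 no --dhchap-ctrlr-key is passed (the test's ckeys[3] is empty), so this round is unidirectional: the target authenticates the host, but the controller is never challenged in return. A sketch of the registration:

rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key3   # no --dhchap-ctrlr-key => host-only (unidirectional) auth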
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.940 10:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.511 00:18:14.511 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.511 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.511 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.511 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.511 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.511 10:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.511 10:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.511 10:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.511 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.511 { 00:18:14.511 "cntlid": 47, 00:18:14.511 "qid": 0, 00:18:14.511 "state": "enabled", 00:18:14.511 "thread": "nvmf_tgt_poll_group_000", 00:18:14.511 "listen_address": { 00:18:14.511 "trtype": "TCP", 00:18:14.511 "adrfam": "IPv4", 00:18:14.511 "traddr": "10.0.0.2", 00:18:14.511 "trsvcid": "4420" 00:18:14.511 }, 00:18:14.511 "peer_address": { 00:18:14.511 "trtype": "TCP", 00:18:14.511 "adrfam": "IPv4", 00:18:14.511 "traddr": "10.0.0.1", 00:18:14.511 "trsvcid": "56666" 00:18:14.511 }, 00:18:14.511 "auth": { 00:18:14.511 "state": "completed", 00:18:14.511 "digest": "sha256", 00:18:14.511 "dhgroup": "ffdhe8192" 00:18:14.511 } 00:18:14.511 } 00:18:14.511 ]' 00:18:14.511 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.771 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.772 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.772 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.772 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.772 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.772 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.772 
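Each hostrpc entry in this trace is immediately followed by its expansion, which pins down what the wrapper does: it is the same rpc.py client, but aimed at the host application's RPC socket instead of the target's. Inferred from those expansions:

hostrpc() {
    # the host-side SPDK app (bdev_nvme) listens on its own RPC socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # prints "nvme0" while attached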
10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.032 10:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.626 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.887 00:18:15.887 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.887 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.887 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.147 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.147 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.147 10:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.147 10:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.147 10:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.147 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.147 { 00:18:16.147 "cntlid": 49, 00:18:16.147 "qid": 0, 00:18:16.147 "state": "enabled", 00:18:16.147 "thread": "nvmf_tgt_poll_group_000", 00:18:16.147 "listen_address": { 00:18:16.147 "trtype": "TCP", 00:18:16.147 "adrfam": "IPv4", 00:18:16.147 "traddr": "10.0.0.2", 00:18:16.147 "trsvcid": "4420" 00:18:16.147 }, 00:18:16.147 "peer_address": { 00:18:16.147 "trtype": "TCP", 00:18:16.147 "adrfam": "IPv4", 00:18:16.147 "traddr": "10.0.0.1", 00:18:16.147 "trsvcid": "56700" 00:18:16.147 }, 00:18:16.147 "auth": { 00:18:16.147 "state": "completed", 00:18:16.147 "digest": "sha384", 00:18:16.147 "dhgroup": "null" 00:18:16.147 } 00:18:16.147 } 00:18:16.147 ]' 00:18:16.147 10:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.147 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.147 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.147 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:16.148 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.148 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.148 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.148 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.407 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:18:16.977 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.977 10:56:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.977 10:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.977 10:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.977 10:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.977 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.977 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:16.977 10:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.237 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.496 00:18:17.496 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.496 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.496 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.756 { 00:18:17.756 "cntlid": 51, 00:18:17.756 "qid": 0, 00:18:17.756 "state": "enabled", 00:18:17.756 "thread": "nvmf_tgt_poll_group_000", 00:18:17.756 "listen_address": { 00:18:17.756 "trtype": "TCP", 00:18:17.756 "adrfam": "IPv4", 00:18:17.756 "traddr": "10.0.0.2", 00:18:17.756 "trsvcid": "4420" 00:18:17.756 }, 00:18:17.756 "peer_address": { 00:18:17.756 "trtype": "TCP", 00:18:17.756 "adrfam": "IPv4", 00:18:17.756 "traddr": "10.0.0.1", 00:18:17.756 "trsvcid": "56734" 00:18:17.756 }, 00:18:17.756 "auth": { 00:18:17.756 "state": "completed", 00:18:17.756 "digest": "sha384", 00:18:17.756 "dhgroup": "null" 00:18:17.756 } 00:18:17.756 } 00:18:17.756 ]' 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.756 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.015 10:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:18:18.584 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.584 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.584 10:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.584 10:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.584 10:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.584 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.584 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:18.584 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:18.843 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:18.844 10:56:35 
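What follows is another pass through connect_authenticate, this time for sha384 with the null DH group, i.e. plain DH-HMAC-CHAP without the FFDHE exchange. A reconstructed sketch of the function's shape, pieced together from the steps this trace repeats for every digest/dhgroup/key combination (the real auth.sh also drives the kernel nvme connect path and the optional controller key):

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
        -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
    # assert the qpair negotiated exactly what was configured
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -e --arg d "$digest" --arg g "$dhgroup" \
        '.[0].auth | .digest == $d and .dhgroup == $g and .state == "completed"'
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}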
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.844 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.844 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:18.844 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.844 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.844 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.844 10:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.844 10:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.844 10:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.844 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.844 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.104 00:18:19.104 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.104 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.104 10:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.104 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.104 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.104 10:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.104 10:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.104 10:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.104 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.104 { 00:18:19.104 "cntlid": 53, 00:18:19.104 "qid": 0, 00:18:19.104 "state": "enabled", 00:18:19.104 "thread": "nvmf_tgt_poll_group_000", 00:18:19.104 "listen_address": { 00:18:19.104 "trtype": "TCP", 00:18:19.104 "adrfam": "IPv4", 00:18:19.104 "traddr": "10.0.0.2", 00:18:19.104 "trsvcid": "4420" 00:18:19.104 }, 00:18:19.104 "peer_address": { 00:18:19.104 "trtype": "TCP", 00:18:19.104 "adrfam": "IPv4", 00:18:19.104 "traddr": "10.0.0.1", 00:18:19.104 "trsvcid": "56758" 00:18:19.104 }, 00:18:19.104 "auth": { 00:18:19.104 "state": "completed", 00:18:19.104 "digest": "sha384", 00:18:19.104 "dhgroup": "null" 00:18:19.104 } 00:18:19.104 } 00:18:19.104 ]' 00:18:19.104 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.378 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:19.378 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.378 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:19.378 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.378 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.378 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.378 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.644 10:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.215 10:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.475 10:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.475 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.475 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.475 00:18:20.475 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.475 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.475 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.736 { 00:18:20.736 "cntlid": 55, 00:18:20.736 "qid": 0, 00:18:20.736 "state": "enabled", 00:18:20.736 "thread": "nvmf_tgt_poll_group_000", 00:18:20.736 "listen_address": { 00:18:20.736 "trtype": "TCP", 00:18:20.736 "adrfam": "IPv4", 00:18:20.736 "traddr": "10.0.0.2", 00:18:20.736 "trsvcid": "4420" 00:18:20.736 }, 00:18:20.736 "peer_address": { 00:18:20.736 "trtype": "TCP", 00:18:20.736 "adrfam": "IPv4", 00:18:20.736 "traddr": "10.0.0.1", 00:18:20.736 "trsvcid": "56784" 00:18:20.736 }, 00:18:20.736 "auth": { 00:18:20.736 "state": "completed", 00:18:20.736 "digest": "sha384", 00:18:20.736 "dhgroup": "null" 00:18:20.736 } 00:18:20.736 } 00:18:20.736 ]' 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.736 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.737 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.737 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.737 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.998 10:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:18:21.570 10:56:38 
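The nvme connect in the next entry exercises the same key from the kernel initiator. DHHC-1 secrets are self-describing ASCII blobs of the form DHHC-1:<hh>:<base64 key material + CRC>:, where <hh> names the transformation (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, matching key0..key3 in this log). Only --dhchap-secret is given here, so the controller is not authenticated back. Recent nvme-cli can mint such a secret; the flags below may vary by nvme-cli version:

key=$(nvme gen-dhchap-key --hmac=1 --key-length=32 \
      --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be)
# the same secret must be configured for this host on the target for auth to pass
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret "$key"   # add --dhchap-ctrl-secret <key> for bidirectional auth
nvme disconnect -n nqn.2024-03.io.spdk:cnode0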
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.570 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.570 10:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.570 10:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.570 10:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.570 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.570 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.570 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.570 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.831 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.092 00:18:22.092 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.092 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.092 10:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.092 10:56:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.092 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.092 10:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.092 10:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.353 10:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.353 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.353 { 00:18:22.353 "cntlid": 57, 00:18:22.353 "qid": 0, 00:18:22.353 "state": "enabled", 00:18:22.353 "thread": "nvmf_tgt_poll_group_000", 00:18:22.353 "listen_address": { 00:18:22.353 "trtype": "TCP", 00:18:22.353 "adrfam": "IPv4", 00:18:22.353 "traddr": "10.0.0.2", 00:18:22.353 "trsvcid": "4420" 00:18:22.353 }, 00:18:22.353 "peer_address": { 00:18:22.353 "trtype": "TCP", 00:18:22.353 "adrfam": "IPv4", 00:18:22.353 "traddr": "10.0.0.1", 00:18:22.353 "trsvcid": "41438" 00:18:22.353 }, 00:18:22.353 "auth": { 00:18:22.353 "state": "completed", 00:18:22.353 "digest": "sha384", 00:18:22.353 "dhgroup": "ffdhe2048" 00:18:22.353 } 00:18:22.353 } 00:18:22.353 ]' 00:18:22.353 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.353 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.353 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.353 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.353 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.353 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.353 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.353 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.614 10:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:18:23.186 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.186 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.186 10:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.186 10:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.186 10:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.186 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.186 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.186 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.447 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.709 00:18:23.709 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.709 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.709 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.709 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.709 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.709 10:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.709 10:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.709 10:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.709 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.709 { 00:18:23.709 "cntlid": 59, 00:18:23.709 "qid": 0, 00:18:23.709 "state": "enabled", 00:18:23.709 "thread": "nvmf_tgt_poll_group_000", 00:18:23.709 "listen_address": { 00:18:23.709 "trtype": "TCP", 00:18:23.709 "adrfam": "IPv4", 00:18:23.709 "traddr": "10.0.0.2", 00:18:23.709 "trsvcid": "4420" 00:18:23.709 }, 00:18:23.709 "peer_address": { 00:18:23.709 "trtype": "TCP", 00:18:23.709 "adrfam": "IPv4", 00:18:23.709 
"traddr": "10.0.0.1", 00:18:23.709 "trsvcid": "41470" 00:18:23.709 }, 00:18:23.709 "auth": { 00:18:23.709 "state": "completed", 00:18:23.709 "digest": "sha384", 00:18:23.709 "dhgroup": "ffdhe2048" 00:18:23.709 } 00:18:23.709 } 00:18:23.709 ]' 00:18:23.709 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.970 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.970 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.970 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:23.970 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.970 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.970 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.970 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.230 10:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:18:24.801 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.801 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.801 10:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.801 10:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.801 10:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.801 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.802 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.802 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.062 10:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.062 00:18:25.062 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.062 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.062 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.323 { 00:18:25.323 "cntlid": 61, 00:18:25.323 "qid": 0, 00:18:25.323 "state": "enabled", 00:18:25.323 "thread": "nvmf_tgt_poll_group_000", 00:18:25.323 "listen_address": { 00:18:25.323 "trtype": "TCP", 00:18:25.323 "adrfam": "IPv4", 00:18:25.323 "traddr": "10.0.0.2", 00:18:25.323 "trsvcid": "4420" 00:18:25.323 }, 00:18:25.323 "peer_address": { 00:18:25.323 "trtype": "TCP", 00:18:25.323 "adrfam": "IPv4", 00:18:25.323 "traddr": "10.0.0.1", 00:18:25.323 "trsvcid": "41490" 00:18:25.323 }, 00:18:25.323 "auth": { 00:18:25.323 "state": "completed", 00:18:25.323 "digest": "sha384", 00:18:25.323 "dhgroup": "ffdhe2048" 00:18:25.323 } 00:18:25.323 } 00:18:25.323 ]' 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:25.323 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.583 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.583 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.583 10:56:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.583 10:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:18:26.154 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.415 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.675 00:18:26.675 10:56:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.675 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.675 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.935 { 00:18:26.935 "cntlid": 63, 00:18:26.935 "qid": 0, 00:18:26.935 "state": "enabled", 00:18:26.935 "thread": "nvmf_tgt_poll_group_000", 00:18:26.935 "listen_address": { 00:18:26.935 "trtype": "TCP", 00:18:26.935 "adrfam": "IPv4", 00:18:26.935 "traddr": "10.0.0.2", 00:18:26.935 "trsvcid": "4420" 00:18:26.935 }, 00:18:26.935 "peer_address": { 00:18:26.935 "trtype": "TCP", 00:18:26.935 "adrfam": "IPv4", 00:18:26.935 "traddr": "10.0.0.1", 00:18:26.935 "trsvcid": "41526" 00:18:26.935 }, 00:18:26.935 "auth": { 00:18:26.935 "state": "completed", 00:18:26.935 "digest": "sha384", 00:18:26.935 "dhgroup": "ffdhe2048" 00:18:26.935 } 00:18:26.935 } 00:18:26.935 ]' 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.935 10:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.196 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:18:27.766 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.766 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.766 10:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.766 10:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:27.766 10:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.767 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.767 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.767 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.767 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.027 10:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.288 00:18:28.288 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.288 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.288 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.549 { 
00:18:28.549 "cntlid": 65, 00:18:28.549 "qid": 0, 00:18:28.549 "state": "enabled", 00:18:28.549 "thread": "nvmf_tgt_poll_group_000", 00:18:28.549 "listen_address": { 00:18:28.549 "trtype": "TCP", 00:18:28.549 "adrfam": "IPv4", 00:18:28.549 "traddr": "10.0.0.2", 00:18:28.549 "trsvcid": "4420" 00:18:28.549 }, 00:18:28.549 "peer_address": { 00:18:28.549 "trtype": "TCP", 00:18:28.549 "adrfam": "IPv4", 00:18:28.549 "traddr": "10.0.0.1", 00:18:28.549 "trsvcid": "41542" 00:18:28.549 }, 00:18:28.549 "auth": { 00:18:28.549 "state": "completed", 00:18:28.549 "digest": "sha384", 00:18:28.549 "dhgroup": "ffdhe3072" 00:18:28.549 } 00:18:28.549 } 00:18:28.549 ]' 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.549 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.809 10:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:18:29.409 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.409 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.409 10:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.409 10:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.409 10:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.409 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.409 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:29.409 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:29.669 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:29.669 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.669 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.670 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.670 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.930 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.930 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.930 10:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.930 10:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.930 10:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.930 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.930 { 00:18:29.930 "cntlid": 67, 00:18:29.930 "qid": 0, 00:18:29.930 "state": "enabled", 00:18:29.930 "thread": "nvmf_tgt_poll_group_000", 00:18:29.930 "listen_address": { 00:18:29.930 "trtype": "TCP", 00:18:29.930 "adrfam": "IPv4", 00:18:29.930 "traddr": "10.0.0.2", 00:18:29.930 "trsvcid": "4420" 00:18:29.930 }, 00:18:29.930 "peer_address": { 00:18:29.930 "trtype": "TCP", 00:18:29.930 "adrfam": "IPv4", 00:18:29.930 "traddr": "10.0.0.1", 00:18:29.930 "trsvcid": "41566" 00:18:29.930 }, 00:18:29.930 "auth": { 00:18:29.930 "state": "completed", 00:18:29.930 "digest": "sha384", 00:18:29.930 "dhgroup": "ffdhe3072" 00:18:29.930 } 00:18:29.930 } 00:18:29.930 ]' 00:18:29.930 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.930 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.930 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.190 10:56:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.190 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.190 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.190 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.190 10:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.190 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.131 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:31.132 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.132 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.132 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.132 10:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.132 10:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.132 10:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.132 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.132 10:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.391 00:18:31.391 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.391 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.391 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.391 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.391 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.391 10:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.391 10:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.652 10:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.652 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.652 { 00:18:31.652 "cntlid": 69, 00:18:31.652 "qid": 0, 00:18:31.652 "state": "enabled", 00:18:31.652 "thread": "nvmf_tgt_poll_group_000", 00:18:31.652 "listen_address": { 00:18:31.652 "trtype": "TCP", 00:18:31.652 "adrfam": "IPv4", 00:18:31.652 "traddr": "10.0.0.2", 00:18:31.652 "trsvcid": "4420" 00:18:31.652 }, 00:18:31.652 "peer_address": { 00:18:31.652 "trtype": "TCP", 00:18:31.652 "adrfam": "IPv4", 00:18:31.652 "traddr": "10.0.0.1", 00:18:31.652 "trsvcid": "56606" 00:18:31.652 }, 00:18:31.652 "auth": { 00:18:31.652 "state": "completed", 00:18:31.652 "digest": "sha384", 00:18:31.652 "dhgroup": "ffdhe3072" 00:18:31.652 } 00:18:31.652 } 00:18:31.652 ]' 00:18:31.652 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.652 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.652 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.652 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:31.652 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.652 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.652 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.652 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.918 10:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret 
DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.574 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.834 00:18:32.834 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.834 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.834 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.095 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.095 10:56:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.095 10:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.095 10:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.095 10:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.095 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.095 { 00:18:33.095 "cntlid": 71, 00:18:33.095 "qid": 0, 00:18:33.095 "state": "enabled", 00:18:33.095 "thread": "nvmf_tgt_poll_group_000", 00:18:33.095 "listen_address": { 00:18:33.095 "trtype": "TCP", 00:18:33.095 "adrfam": "IPv4", 00:18:33.095 "traddr": "10.0.0.2", 00:18:33.095 "trsvcid": "4420" 00:18:33.095 }, 00:18:33.095 "peer_address": { 00:18:33.095 "trtype": "TCP", 00:18:33.095 "adrfam": "IPv4", 00:18:33.095 "traddr": "10.0.0.1", 00:18:33.095 "trsvcid": "56626" 00:18:33.095 }, 00:18:33.095 "auth": { 00:18:33.095 "state": "completed", 00:18:33.095 "digest": "sha384", 00:18:33.095 "dhgroup": "ffdhe3072" 00:18:33.095 } 00:18:33.095 } 00:18:33.095 ]' 00:18:33.095 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.095 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.095 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.095 10:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.095 10:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.095 10:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.095 10:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.095 10:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.355 10:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:18:33.926 10:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.926 10:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.926 10:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.926 10:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.926 10:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.926 10:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.926 10:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.926 10:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.926 10:56:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.186 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.446 00:18:34.446 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.446 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.446 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.707 { 00:18:34.707 "cntlid": 73, 00:18:34.707 "qid": 0, 00:18:34.707 "state": "enabled", 00:18:34.707 "thread": "nvmf_tgt_poll_group_000", 00:18:34.707 "listen_address": { 00:18:34.707 "trtype": "TCP", 00:18:34.707 "adrfam": "IPv4", 00:18:34.707 "traddr": "10.0.0.2", 00:18:34.707 "trsvcid": "4420" 00:18:34.707 }, 00:18:34.707 "peer_address": { 00:18:34.707 "trtype": "TCP", 00:18:34.707 "adrfam": "IPv4", 00:18:34.707 "traddr": "10.0.0.1", 00:18:34.707 "trsvcid": "56644" 00:18:34.707 }, 00:18:34.707 "auth": { 00:18:34.707 
"state": "completed", 00:18:34.707 "digest": "sha384", 00:18:34.707 "dhgroup": "ffdhe4096" 00:18:34.707 } 00:18:34.707 } 00:18:34.707 ]' 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.707 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.968 10:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:18:35.538 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.538 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.538 10:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.538 10:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.538 10:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.538 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.538 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.538 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.799 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.061 00:18:36.061 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.061 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.061 10:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.321 { 00:18:36.321 "cntlid": 75, 00:18:36.321 "qid": 0, 00:18:36.321 "state": "enabled", 00:18:36.321 "thread": "nvmf_tgt_poll_group_000", 00:18:36.321 "listen_address": { 00:18:36.321 "trtype": "TCP", 00:18:36.321 "adrfam": "IPv4", 00:18:36.321 "traddr": "10.0.0.2", 00:18:36.321 "trsvcid": "4420" 00:18:36.321 }, 00:18:36.321 "peer_address": { 00:18:36.321 "trtype": "TCP", 00:18:36.321 "adrfam": "IPv4", 00:18:36.321 "traddr": "10.0.0.1", 00:18:36.321 "trsvcid": "56682" 00:18:36.321 }, 00:18:36.321 "auth": { 00:18:36.321 "state": "completed", 00:18:36.321 "digest": "sha384", 00:18:36.321 "dhgroup": "ffdhe4096" 00:18:36.321 } 00:18:36.321 } 00:18:36.321 ]' 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.321 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.581 10:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:18:37.151 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.151 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.151 10:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.151 10:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.151 10:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.151 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.151 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.151 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.412 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:37.673 00:18:37.673 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.673 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.673 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.673 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.673 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.673 10:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.673 10:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.935 10:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.935 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.935 { 00:18:37.935 "cntlid": 77, 00:18:37.935 "qid": 0, 00:18:37.935 "state": "enabled", 00:18:37.935 "thread": "nvmf_tgt_poll_group_000", 00:18:37.935 "listen_address": { 00:18:37.935 "trtype": "TCP", 00:18:37.935 "adrfam": "IPv4", 00:18:37.935 "traddr": "10.0.0.2", 00:18:37.935 "trsvcid": "4420" 00:18:37.935 }, 00:18:37.935 "peer_address": { 00:18:37.935 "trtype": "TCP", 00:18:37.935 "adrfam": "IPv4", 00:18:37.935 "traddr": "10.0.0.1", 00:18:37.935 "trsvcid": "56706" 00:18:37.935 }, 00:18:37.935 "auth": { 00:18:37.935 "state": "completed", 00:18:37.935 "digest": "sha384", 00:18:37.935 "dhgroup": "ffdhe4096" 00:18:37.935 } 00:18:37.935 } 00:18:37.935 ]' 00:18:37.935 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.935 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.935 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.935 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:37.935 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.935 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.935 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.935 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.196 10:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:18:38.768 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.768 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.768 10:56:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.768 10:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.768 10:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.768 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.768 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.768 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.029 10:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.290 00:18:39.290 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.290 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.290 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.290 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.290 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.290 10:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.290 10:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.290 10:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.290 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.290 { 00:18:39.290 "cntlid": 79, 00:18:39.290 "qid": 
0, 00:18:39.290 "state": "enabled", 00:18:39.290 "thread": "nvmf_tgt_poll_group_000", 00:18:39.290 "listen_address": { 00:18:39.290 "trtype": "TCP", 00:18:39.290 "adrfam": "IPv4", 00:18:39.290 "traddr": "10.0.0.2", 00:18:39.290 "trsvcid": "4420" 00:18:39.290 }, 00:18:39.290 "peer_address": { 00:18:39.290 "trtype": "TCP", 00:18:39.290 "adrfam": "IPv4", 00:18:39.290 "traddr": "10.0.0.1", 00:18:39.290 "trsvcid": "56732" 00:18:39.290 }, 00:18:39.290 "auth": { 00:18:39.290 "state": "completed", 00:18:39.290 "digest": "sha384", 00:18:39.290 "dhgroup": "ffdhe4096" 00:18:39.290 } 00:18:39.290 } 00:18:39.290 ]' 00:18:39.290 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.551 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.551 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.551 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.551 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.551 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.551 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.551 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.551 10:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:18:40.123 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.384 10:56:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.384 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.645 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.906 { 00:18:40.906 "cntlid": 81, 00:18:40.906 "qid": 0, 00:18:40.906 "state": "enabled", 00:18:40.906 "thread": "nvmf_tgt_poll_group_000", 00:18:40.906 "listen_address": { 00:18:40.906 "trtype": "TCP", 00:18:40.906 "adrfam": "IPv4", 00:18:40.906 "traddr": "10.0.0.2", 00:18:40.906 "trsvcid": "4420" 00:18:40.906 }, 00:18:40.906 "peer_address": { 00:18:40.906 "trtype": "TCP", 00:18:40.906 "adrfam": "IPv4", 00:18:40.906 "traddr": "10.0.0.1", 00:18:40.906 "trsvcid": "40168" 00:18:40.906 }, 00:18:40.906 "auth": { 00:18:40.906 "state": "completed", 00:18:40.906 "digest": "sha384", 00:18:40.906 "dhgroup": "ffdhe6144" 00:18:40.906 } 00:18:40.906 } 00:18:40.906 ]' 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.906 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.167 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.167 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.167 10:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.167 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.108 10:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.369 00:18:42.369 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.369 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.369 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.629 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.629 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.629 10:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.629 10:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.629 10:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.629 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.629 { 00:18:42.629 "cntlid": 83, 00:18:42.629 "qid": 0, 00:18:42.629 "state": "enabled", 00:18:42.629 "thread": "nvmf_tgt_poll_group_000", 00:18:42.629 "listen_address": { 00:18:42.629 "trtype": "TCP", 00:18:42.629 "adrfam": "IPv4", 00:18:42.629 "traddr": "10.0.0.2", 00:18:42.629 "trsvcid": "4420" 00:18:42.630 }, 00:18:42.630 "peer_address": { 00:18:42.630 "trtype": "TCP", 00:18:42.630 "adrfam": "IPv4", 00:18:42.630 "traddr": "10.0.0.1", 00:18:42.630 "trsvcid": "40196" 00:18:42.630 }, 00:18:42.630 "auth": { 00:18:42.630 "state": "completed", 00:18:42.630 "digest": "sha384", 00:18:42.630 "dhgroup": "ffdhe6144" 00:18:42.630 } 00:18:42.630 } 00:18:42.630 ]' 00:18:42.630 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.630 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.630 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.630 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.630 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.630 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.630 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.630 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.891 10:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret 
DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:18:43.462 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.462 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.462 10:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.462 10:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.462 10:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.462 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.462 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.462 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.723 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.984 00:18:43.984 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.984 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.984 10:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.245 { 00:18:44.245 "cntlid": 85, 00:18:44.245 "qid": 0, 00:18:44.245 "state": "enabled", 00:18:44.245 "thread": "nvmf_tgt_poll_group_000", 00:18:44.245 "listen_address": { 00:18:44.245 "trtype": "TCP", 00:18:44.245 "adrfam": "IPv4", 00:18:44.245 "traddr": "10.0.0.2", 00:18:44.245 "trsvcid": "4420" 00:18:44.245 }, 00:18:44.245 "peer_address": { 00:18:44.245 "trtype": "TCP", 00:18:44.245 "adrfam": "IPv4", 00:18:44.245 "traddr": "10.0.0.1", 00:18:44.245 "trsvcid": "40230" 00:18:44.245 }, 00:18:44.245 "auth": { 00:18:44.245 "state": "completed", 00:18:44.245 "digest": "sha384", 00:18:44.245 "dhgroup": "ffdhe6144" 00:18:44.245 } 00:18:44.245 } 00:18:44.245 ]' 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.245 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.505 10:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:18:45.075 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.075 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.075 10:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.075 10:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.075 10:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.075 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.075 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
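Each pass of the loop above runs the same six-step cycle: pin the host to a single digest/dhgroup pair, authorize the host NQN on the subsystem with a DH-CHAP key (plus an optional controller key for bidirectional authentication), attach a controller so DH-HMAC-CHAP actually negotiates, read the negotiated parameters back from the qpair list, then tear everything down. Condensed into a standalone bash sketch: the target is assumed to listen on SPDK's default /var/tmp/spdk.sock (the log's rpc_cmd wrapper hides the target socket), and key1/ckey1 are assumed to be key names registered with the target before this excerpt.

    #!/usr/bin/env bash
    set -e
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: accept exactly one digest and one dhgroup, so the result
    # verified below is deterministic.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Target side: authorize the host with a key and a controller key.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attach a controller; DH-HMAC-CHAP runs during this step.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Read back what was actually negotiated on the new qpair.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

    # Tear down for the next digest/dhgroup/key combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The escaped comparisons in the log ([[ sha384 == \s\h\a\3\8\4 ]] and friends) are just bash xtrace rendering of plain string-equality checks against the jq output above.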
00:18:45.075 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.334 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:45.334 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.335 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.335 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.335 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.335 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.335 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:45.335 10:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.335 10:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.335 10:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.335 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.335 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.595 00:18:45.595 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.595 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.595 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.857 { 00:18:45.857 "cntlid": 87, 00:18:45.857 "qid": 0, 00:18:45.857 "state": "enabled", 00:18:45.857 "thread": "nvmf_tgt_poll_group_000", 00:18:45.857 "listen_address": { 00:18:45.857 "trtype": "TCP", 00:18:45.857 "adrfam": "IPv4", 00:18:45.857 "traddr": "10.0.0.2", 00:18:45.857 "trsvcid": "4420" 00:18:45.857 }, 00:18:45.857 "peer_address": { 00:18:45.857 "trtype": "TCP", 00:18:45.857 "adrfam": "IPv4", 00:18:45.857 "traddr": "10.0.0.1", 00:18:45.857 "trsvcid": "40256" 00:18:45.857 }, 00:18:45.857 "auth": { 00:18:45.857 "state": "completed", 
00:18:45.857 "digest": "sha384", 00:18:45.857 "dhgroup": "ffdhe6144" 00:18:45.857 } 00:18:45.857 } 00:18:45.857 ]' 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.857 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.124 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.124 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.124 10:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.124 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:18:46.696 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.696 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.696 10:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.696 10:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.696 10:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.696 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.696 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.696 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.696 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.956 10:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.528 00:18:47.528 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.528 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.528 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.528 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.528 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.528 10:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.528 10:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.528 10:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.528 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.528 { 00:18:47.528 "cntlid": 89, 00:18:47.528 "qid": 0, 00:18:47.528 "state": "enabled", 00:18:47.528 "thread": "nvmf_tgt_poll_group_000", 00:18:47.528 "listen_address": { 00:18:47.528 "trtype": "TCP", 00:18:47.528 "adrfam": "IPv4", 00:18:47.528 "traddr": "10.0.0.2", 00:18:47.528 "trsvcid": "4420" 00:18:47.528 }, 00:18:47.528 "peer_address": { 00:18:47.528 "trtype": "TCP", 00:18:47.528 "adrfam": "IPv4", 00:18:47.528 "traddr": "10.0.0.1", 00:18:47.528 "trsvcid": "40274" 00:18:47.528 }, 00:18:47.528 "auth": { 00:18:47.528 "state": "completed", 00:18:47.528 "digest": "sha384", 00:18:47.528 "dhgroup": "ffdhe8192" 00:18:47.528 } 00:18:47.528 } 00:18:47.528 ]' 00:18:47.528 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.788 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.788 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.788 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.788 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.788 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.788 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.788 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.049 10:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:18:48.620 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.620 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.620 10:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.620 10:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.620 10:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.620 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.620 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:48.620 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.880 10:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
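Every cycle is also re-checked in-band through the kernel initiator: nvme-cli connects with the same secrets in their DHHC-1 wire representation and then disconnects, as in the entries above. A minimal sketch of that step with placeholder secrets (the real values are the base64 blobs in the log); the two-digit field after DHHC-1 selects the optional secret transform, 00 for an untransformed secret and 03 for SHA-512.

    # hostid is the uuid portion of the host NQN, as in the log.
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "${hostnqn##*:}" \
        --dhchap-secret 'DHHC-1:00:<base64-key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64-key>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The "disconnected 1 controller(s)" lines are the pass signal for this step; an authentication failure would surface as a failed connect instead.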
00:18:49.140 00:18:49.140 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.140 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.140 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.401 { 00:18:49.401 "cntlid": 91, 00:18:49.401 "qid": 0, 00:18:49.401 "state": "enabled", 00:18:49.401 "thread": "nvmf_tgt_poll_group_000", 00:18:49.401 "listen_address": { 00:18:49.401 "trtype": "TCP", 00:18:49.401 "adrfam": "IPv4", 00:18:49.401 "traddr": "10.0.0.2", 00:18:49.401 "trsvcid": "4420" 00:18:49.401 }, 00:18:49.401 "peer_address": { 00:18:49.401 "trtype": "TCP", 00:18:49.401 "adrfam": "IPv4", 00:18:49.401 "traddr": "10.0.0.1", 00:18:49.401 "trsvcid": "40302" 00:18:49.401 }, 00:18:49.401 "auth": { 00:18:49.401 "state": "completed", 00:18:49.401 "digest": "sha384", 00:18:49.401 "dhgroup": "ffdhe8192" 00:18:49.401 } 00:18:49.401 } 00:18:49.401 ]' 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.401 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.662 10:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:18:50.233 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.233 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.233 10:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:50.233 10:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.233 10:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.233 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.233 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.233 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.494 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.067 00:18:51.067 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.067 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.067 10:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.067 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.067 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.067 10:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.067 10:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 10:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.067 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.067 { 
00:18:51.067 "cntlid": 93, 00:18:51.067 "qid": 0, 00:18:51.067 "state": "enabled", 00:18:51.067 "thread": "nvmf_tgt_poll_group_000", 00:18:51.067 "listen_address": { 00:18:51.067 "trtype": "TCP", 00:18:51.067 "adrfam": "IPv4", 00:18:51.067 "traddr": "10.0.0.2", 00:18:51.067 "trsvcid": "4420" 00:18:51.067 }, 00:18:51.067 "peer_address": { 00:18:51.067 "trtype": "TCP", 00:18:51.067 "adrfam": "IPv4", 00:18:51.067 "traddr": "10.0.0.1", 00:18:51.067 "trsvcid": "52570" 00:18:51.067 }, 00:18:51.067 "auth": { 00:18:51.067 "state": "completed", 00:18:51.067 "digest": "sha384", 00:18:51.067 "dhgroup": "ffdhe8192" 00:18:51.067 } 00:18:51.067 } 00:18:51.067 ]' 00:18:51.067 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.328 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.328 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.328 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.328 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.328 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.328 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.328 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.589 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:18:52.161 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.161 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.161 10:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.161 10:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.161 10:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.161 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.161 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.161 10:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.161 10:57:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.161 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.730 00:18:52.730 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.730 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.730 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.990 { 00:18:52.990 "cntlid": 95, 00:18:52.990 "qid": 0, 00:18:52.990 "state": "enabled", 00:18:52.990 "thread": "nvmf_tgt_poll_group_000", 00:18:52.990 "listen_address": { 00:18:52.990 "trtype": "TCP", 00:18:52.990 "adrfam": "IPv4", 00:18:52.990 "traddr": "10.0.0.2", 00:18:52.990 "trsvcid": "4420" 00:18:52.990 }, 00:18:52.990 "peer_address": { 00:18:52.990 "trtype": "TCP", 00:18:52.990 "adrfam": "IPv4", 00:18:52.990 "traddr": "10.0.0.1", 00:18:52.990 "trsvcid": "52604" 00:18:52.990 }, 00:18:52.990 "auth": { 00:18:52.990 "state": "completed", 00:18:52.990 "digest": "sha384", 00:18:52.990 "dhgroup": "ffdhe8192" 00:18:52.990 } 00:18:52.990 } 00:18:52.990 ]' 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.990 10:57:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.990 10:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.250 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:18:53.822 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.822 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.822 10:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.822 10:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.822 10:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.822 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:53.822 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.822 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.822 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.822 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.083 10:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.343 00:18:54.343 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.343 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.343 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.343 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.343 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.344 10:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.344 10:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.344 10:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.344 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.344 { 00:18:54.344 "cntlid": 97, 00:18:54.344 "qid": 0, 00:18:54.344 "state": "enabled", 00:18:54.344 "thread": "nvmf_tgt_poll_group_000", 00:18:54.344 "listen_address": { 00:18:54.344 "trtype": "TCP", 00:18:54.344 "adrfam": "IPv4", 00:18:54.344 "traddr": "10.0.0.2", 00:18:54.344 "trsvcid": "4420" 00:18:54.344 }, 00:18:54.344 "peer_address": { 00:18:54.344 "trtype": "TCP", 00:18:54.344 "adrfam": "IPv4", 00:18:54.344 "traddr": "10.0.0.1", 00:18:54.344 "trsvcid": "52628" 00:18:54.344 }, 00:18:54.344 "auth": { 00:18:54.344 "state": "completed", 00:18:54.344 "digest": "sha512", 00:18:54.344 "dhgroup": "null" 00:18:54.344 } 00:18:54.344 } 00:18:54.344 ]' 00:18:54.344 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.344 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.344 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.604 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:54.604 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.604 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.604 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.604 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.604 10:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret 
DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.544 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.809 00:18:55.809 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.809 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.809 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.134 { 00:18:56.134 "cntlid": 99, 00:18:56.134 "qid": 0, 00:18:56.134 "state": "enabled", 00:18:56.134 "thread": "nvmf_tgt_poll_group_000", 00:18:56.134 "listen_address": { 00:18:56.134 "trtype": "TCP", 00:18:56.134 "adrfam": "IPv4", 00:18:56.134 "traddr": "10.0.0.2", 00:18:56.134 "trsvcid": "4420" 00:18:56.134 }, 00:18:56.134 "peer_address": { 00:18:56.134 "trtype": "TCP", 00:18:56.134 "adrfam": "IPv4", 00:18:56.134 "traddr": "10.0.0.1", 00:18:56.134 "trsvcid": "52646" 00:18:56.134 }, 00:18:56.134 "auth": { 00:18:56.134 "state": "completed", 00:18:56.134 "digest": "sha512", 00:18:56.134 "dhgroup": "null" 00:18:56.134 } 00:18:56.134 } 00:18:56.134 ]' 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.134 10:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.417 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.986 10:57:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.986 10:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.246 10:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.246 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.246 10:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.246 00:18:57.246 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.246 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.246 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.506 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.506 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.506 10:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.506 10:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.506 10:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.506 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.506 { 00:18:57.506 "cntlid": 101, 00:18:57.506 "qid": 0, 00:18:57.506 "state": "enabled", 00:18:57.506 "thread": "nvmf_tgt_poll_group_000", 00:18:57.506 "listen_address": { 00:18:57.506 "trtype": "TCP", 00:18:57.506 "adrfam": "IPv4", 00:18:57.506 "traddr": "10.0.0.2", 00:18:57.506 "trsvcid": "4420" 00:18:57.506 }, 00:18:57.506 "peer_address": { 00:18:57.506 "trtype": "TCP", 00:18:57.506 "adrfam": "IPv4", 00:18:57.506 "traddr": "10.0.0.1", 00:18:57.506 "trsvcid": "52672" 00:18:57.506 }, 00:18:57.506 "auth": 
{ 00:18:57.506 "state": "completed", 00:18:57.506 "digest": "sha512", 00:18:57.506 "dhgroup": "null" 00:18:57.506 } 00:18:57.506 } 00:18:57.506 ]' 00:18:57.506 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.506 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.506 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.766 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:57.766 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.766 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.766 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.766 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.766 10:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.707 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.968 00:18:58.969 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.969 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.969 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.969 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.969 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.969 10:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.969 10:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.969 10:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.969 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.969 { 00:18:58.969 "cntlid": 103, 00:18:58.969 "qid": 0, 00:18:58.969 "state": "enabled", 00:18:58.969 "thread": "nvmf_tgt_poll_group_000", 00:18:58.969 "listen_address": { 00:18:58.969 "trtype": "TCP", 00:18:58.969 "adrfam": "IPv4", 00:18:58.969 "traddr": "10.0.0.2", 00:18:58.969 "trsvcid": "4420" 00:18:58.969 }, 00:18:58.969 "peer_address": { 00:18:58.969 "trtype": "TCP", 00:18:58.969 "adrfam": "IPv4", 00:18:58.969 "traddr": "10.0.0.1", 00:18:58.969 "trsvcid": "52700" 00:18:58.969 }, 00:18:58.969 "auth": { 00:18:58.969 "state": "completed", 00:18:58.969 "digest": "sha512", 00:18:58.969 "dhgroup": "null" 00:18:58.969 } 00:18:58.969 } 00:18:58.969 ]' 00:18:58.969 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.230 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.230 10:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.230 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.230 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.230 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.230 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.230 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.489 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:19:00.058 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.058 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.058 10:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.058 10:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.058 10:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.058 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.058 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.058 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.058 10:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.059 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.319 00:19:00.319 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.319 10:57:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.319 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.580 { 00:19:00.580 "cntlid": 105, 00:19:00.580 "qid": 0, 00:19:00.580 "state": "enabled", 00:19:00.580 "thread": "nvmf_tgt_poll_group_000", 00:19:00.580 "listen_address": { 00:19:00.580 "trtype": "TCP", 00:19:00.580 "adrfam": "IPv4", 00:19:00.580 "traddr": "10.0.0.2", 00:19:00.580 "trsvcid": "4420" 00:19:00.580 }, 00:19:00.580 "peer_address": { 00:19:00.580 "trtype": "TCP", 00:19:00.580 "adrfam": "IPv4", 00:19:00.580 "traddr": "10.0.0.1", 00:19:00.580 "trsvcid": "52724" 00:19:00.580 }, 00:19:00.580 "auth": { 00:19:00.580 "state": "completed", 00:19:00.580 "digest": "sha512", 00:19:00.580 "dhgroup": "ffdhe2048" 00:19:00.580 } 00:19:00.580 } 00:19:00.580 ]' 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.580 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.841 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.841 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.841 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.841 10:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:19:01.412 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.413 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.413 10:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.413 10:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
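
The loop above repeats one fixed sequence per (digest, dhgroup, key) combination: pin the host-side bdev layer to a single DH-HMAC-CHAP digest and DH group, register the host NQN on the target with a key pair, attach a controller through the host RPC socket, check via nvmf_subsystem_get_qpairs that the qpair negotiated exactly those parameters, then detach and remove the host. A minimal bash sketch of one iteration, condensed from the trace — the target is assumed to listen on its default RPC socket, key names key1/ckey1 are assumed pre-registered, and the host UUID is a placeholder, not a value from this run:

rpc=scripts/rpc.py                    # SPDK RPC client
hostsock=/var/tmp/host.sock           # host-side SPDK app socket, as in the trace
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:<host-uuid>   # placeholder
digest=sha512 dhgroup=ffdhe2048 keyid=1

# Pin the host to one digest/dhgroup so the negotiated values are deterministic.
$rpc -s "$hostsock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: allow this host, with host key + controller key for bidirectional auth.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Host side: attach a controller, authenticating with the same key pair.
$rpc -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The qpair must report the exact digest/dhgroup and a completed auth state.
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"   ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup"  ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

# Tear down before the next combination.
$rpc -s "$hostsock" bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
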
00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.673 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.934 00:19:01.934 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.934 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.934 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.195 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.195 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.195 10:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.195 10:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.195 10:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.195 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.195 { 00:19:02.195 "cntlid": 107, 00:19:02.195 "qid": 0, 00:19:02.195 "state": "enabled", 00:19:02.195 "thread": 
"nvmf_tgt_poll_group_000", 00:19:02.195 "listen_address": { 00:19:02.195 "trtype": "TCP", 00:19:02.195 "adrfam": "IPv4", 00:19:02.195 "traddr": "10.0.0.2", 00:19:02.195 "trsvcid": "4420" 00:19:02.195 }, 00:19:02.195 "peer_address": { 00:19:02.195 "trtype": "TCP", 00:19:02.195 "adrfam": "IPv4", 00:19:02.195 "traddr": "10.0.0.1", 00:19:02.195 "trsvcid": "34018" 00:19:02.195 }, 00:19:02.195 "auth": { 00:19:02.195 "state": "completed", 00:19:02.195 "digest": "sha512", 00:19:02.195 "dhgroup": "ffdhe2048" 00:19:02.195 } 00:19:02.195 } 00:19:02.195 ]' 00:19:02.195 10:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.195 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.195 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.195 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.195 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.195 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.195 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.195 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.457 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:19:03.028 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.028 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.028 10:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.028 10:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.028 10:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.028 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.028 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:03.028 10:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:03.312 10:57:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.312 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.573 00:19:03.573 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.573 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.573 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.573 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.573 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.573 10:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.573 10:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.573 10:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.573 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.573 { 00:19:03.573 "cntlid": 109, 00:19:03.573 "qid": 0, 00:19:03.573 "state": "enabled", 00:19:03.573 "thread": "nvmf_tgt_poll_group_000", 00:19:03.573 "listen_address": { 00:19:03.573 "trtype": "TCP", 00:19:03.573 "adrfam": "IPv4", 00:19:03.573 "traddr": "10.0.0.2", 00:19:03.573 "trsvcid": "4420" 00:19:03.573 }, 00:19:03.573 "peer_address": { 00:19:03.573 "trtype": "TCP", 00:19:03.573 "adrfam": "IPv4", 00:19:03.573 "traddr": "10.0.0.1", 00:19:03.573 "trsvcid": "34044" 00:19:03.573 }, 00:19:03.573 "auth": { 00:19:03.573 "state": "completed", 00:19:03.573 "digest": "sha512", 00:19:03.573 "dhgroup": "ffdhe2048" 00:19:03.573 } 00:19:03.573 } 00:19:03.573 ]' 00:19:03.573 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.833 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.833 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.833 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:03.833 10:57:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.833 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.833 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.833 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.093 10:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:19:04.664 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.664 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.664 10:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.664 10:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.664 10:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.664 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.664 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.664 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.925 10:57:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.925 00:19:05.185 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.185 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.186 10:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.186 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.186 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.186 10:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.186 10:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.186 10:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.186 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.186 { 00:19:05.186 "cntlid": 111, 00:19:05.186 "qid": 0, 00:19:05.186 "state": "enabled", 00:19:05.186 "thread": "nvmf_tgt_poll_group_000", 00:19:05.186 "listen_address": { 00:19:05.186 "trtype": "TCP", 00:19:05.186 "adrfam": "IPv4", 00:19:05.186 "traddr": "10.0.0.2", 00:19:05.186 "trsvcid": "4420" 00:19:05.186 }, 00:19:05.186 "peer_address": { 00:19:05.186 "trtype": "TCP", 00:19:05.186 "adrfam": "IPv4", 00:19:05.186 "traddr": "10.0.0.1", 00:19:05.186 "trsvcid": "34076" 00:19:05.186 }, 00:19:05.186 "auth": { 00:19:05.186 "state": "completed", 00:19:05.186 "digest": "sha512", 00:19:05.186 "dhgroup": "ffdhe2048" 00:19:05.186 } 00:19:05.186 } 00:19:05.186 ]' 00:19:05.186 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.186 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.186 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.447 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.447 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.447 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.447 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.447 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.447 10:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.390 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.650 00:19:06.650 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.650 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.650 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.650 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.650 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.650 10:57:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.650 10:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.650 10:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.650 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.650 { 00:19:06.650 "cntlid": 113, 00:19:06.650 "qid": 0, 00:19:06.650 "state": "enabled", 00:19:06.650 "thread": "nvmf_tgt_poll_group_000", 00:19:06.650 "listen_address": { 00:19:06.650 "trtype": "TCP", 00:19:06.650 "adrfam": "IPv4", 00:19:06.650 "traddr": "10.0.0.2", 00:19:06.650 "trsvcid": "4420" 00:19:06.650 }, 00:19:06.650 "peer_address": { 00:19:06.650 "trtype": "TCP", 00:19:06.650 "adrfam": "IPv4", 00:19:06.650 "traddr": "10.0.0.1", 00:19:06.650 "trsvcid": "34098" 00:19:06.650 }, 00:19:06.650 "auth": { 00:19:06.650 "state": "completed", 00:19:06.650 "digest": "sha512", 00:19:06.650 "dhgroup": "ffdhe3072" 00:19:06.650 } 00:19:06.650 } 00:19:06.650 ]' 00:19:06.650 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.910 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.910 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.910 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.910 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.910 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.910 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.910 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.170 10:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:19:07.741 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.741 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.741 10:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.741 10:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.741 10:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.741 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.741 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:07.741 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
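
Each iteration also exercises the Linux kernel initiator: nvme-cli connects with in-band DH-HMAC-CHAP secrets passed literally (the DHHC-1:NN:<base64>: strings, where the NN field identifies the hash used to transform the secret, 00 meaning an unhashed secret — matching key0 through key3 carrying 00 through 03 in this run), then disconnects. A hedged sketch of that check, with placeholder secrets and host UUID rather than the values from the trace:

# Connect through the kernel initiator; -i 1 requests a single I/O queue as in the trace.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid <host-uuid> \
    --dhchap-secret 'DHHC-1:02:<base64 host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:01:<base64 controller secret>:'

# On success the trace logs "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)".
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
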
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.002 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.002 00:19:08.262 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.262 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.262 10:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.262 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.262 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.262 10:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.262 10:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.262 10:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.262 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.262 { 00:19:08.262 "cntlid": 115, 00:19:08.262 "qid": 0, 00:19:08.262 "state": "enabled", 00:19:08.262 "thread": "nvmf_tgt_poll_group_000", 00:19:08.262 "listen_address": { 00:19:08.262 "trtype": "TCP", 00:19:08.262 "adrfam": "IPv4", 00:19:08.262 "traddr": "10.0.0.2", 00:19:08.262 "trsvcid": "4420" 00:19:08.262 }, 00:19:08.262 "peer_address": { 00:19:08.262 "trtype": "TCP", 00:19:08.262 "adrfam": "IPv4", 00:19:08.262 "traddr": "10.0.0.1", 00:19:08.262 "trsvcid": "34120" 00:19:08.262 }, 00:19:08.262 "auth": { 00:19:08.262 "state": "completed", 00:19:08.262 "digest": "sha512", 00:19:08.262 "dhgroup": "ffdhe3072" 00:19:08.262 } 00:19:08.262 } 
00:19:08.262 ]' 00:19:08.262 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.262 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.262 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.523 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.523 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.523 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.523 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.523 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.523 10:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.465 10:57:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.465 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.725 00:19:09.725 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.725 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.725 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.985 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.985 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.985 10:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.985 10:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.985 10:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.985 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.985 { 00:19:09.985 "cntlid": 117, 00:19:09.985 "qid": 0, 00:19:09.985 "state": "enabled", 00:19:09.985 "thread": "nvmf_tgt_poll_group_000", 00:19:09.985 "listen_address": { 00:19:09.985 "trtype": "TCP", 00:19:09.985 "adrfam": "IPv4", 00:19:09.985 "traddr": "10.0.0.2", 00:19:09.985 "trsvcid": "4420" 00:19:09.985 }, 00:19:09.985 "peer_address": { 00:19:09.985 "trtype": "TCP", 00:19:09.985 "adrfam": "IPv4", 00:19:09.985 "traddr": "10.0.0.1", 00:19:09.985 "trsvcid": "34154" 00:19:09.985 }, 00:19:09.985 "auth": { 00:19:09.985 "state": "completed", 00:19:09.985 "digest": "sha512", 00:19:09.985 "dhgroup": "ffdhe3072" 00:19:09.985 } 00:19:09.985 } 00:19:09.985 ]' 00:19:09.985 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.985 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.985 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.985 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:09.986 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.986 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.986 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.986 10:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.246 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:19:10.817 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.817 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.817 10:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.817 10:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.817 10:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.817 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.817 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:10.817 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.078 10:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.339 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.339 { 00:19:11.339 "cntlid": 119, 00:19:11.339 "qid": 0, 00:19:11.339 "state": "enabled", 00:19:11.339 "thread": "nvmf_tgt_poll_group_000", 00:19:11.339 "listen_address": { 00:19:11.339 "trtype": "TCP", 00:19:11.339 "adrfam": "IPv4", 00:19:11.339 "traddr": "10.0.0.2", 00:19:11.339 "trsvcid": "4420" 00:19:11.339 }, 00:19:11.339 "peer_address": { 00:19:11.339 "trtype": "TCP", 00:19:11.339 "adrfam": "IPv4", 00:19:11.339 "traddr": "10.0.0.1", 00:19:11.339 "trsvcid": "51672" 00:19:11.339 }, 00:19:11.339 "auth": { 00:19:11.339 "state": "completed", 00:19:11.339 "digest": "sha512", 00:19:11.339 "dhgroup": "ffdhe3072" 00:19:11.339 } 00:19:11.339 } 00:19:11.339 ]' 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.339 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.600 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.600 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.600 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.600 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.600 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.600 10:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.542 10:57:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.542 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.803 00:19:12.803 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.803 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.803 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.063 { 00:19:13.063 "cntlid": 121, 00:19:13.063 "qid": 0, 00:19:13.063 "state": "enabled", 00:19:13.063 "thread": "nvmf_tgt_poll_group_000", 00:19:13.063 "listen_address": { 00:19:13.063 "trtype": "TCP", 00:19:13.063 "adrfam": "IPv4", 
00:19:13.063 "traddr": "10.0.0.2", 00:19:13.063 "trsvcid": "4420" 00:19:13.063 }, 00:19:13.063 "peer_address": { 00:19:13.063 "trtype": "TCP", 00:19:13.063 "adrfam": "IPv4", 00:19:13.063 "traddr": "10.0.0.1", 00:19:13.063 "trsvcid": "51692" 00:19:13.063 }, 00:19:13.063 "auth": { 00:19:13.063 "state": "completed", 00:19:13.063 "digest": "sha512", 00:19:13.063 "dhgroup": "ffdhe4096" 00:19:13.063 } 00:19:13.063 } 00:19:13.063 ]' 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.063 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.064 10:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.324 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:19:13.897 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.897 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.897 10:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.897 10:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.897 10:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.897 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.897 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.897 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:14.158 10:57:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.158 10:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.419 00:19:14.419 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.419 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.419 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.419 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.419 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.419 10:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.419 10:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.680 10:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.680 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.680 { 00:19:14.680 "cntlid": 123, 00:19:14.680 "qid": 0, 00:19:14.680 "state": "enabled", 00:19:14.680 "thread": "nvmf_tgt_poll_group_000", 00:19:14.680 "listen_address": { 00:19:14.680 "trtype": "TCP", 00:19:14.680 "adrfam": "IPv4", 00:19:14.680 "traddr": "10.0.0.2", 00:19:14.680 "trsvcid": "4420" 00:19:14.680 }, 00:19:14.680 "peer_address": { 00:19:14.680 "trtype": "TCP", 00:19:14.680 "adrfam": "IPv4", 00:19:14.680 "traddr": "10.0.0.1", 00:19:14.680 "trsvcid": "51698" 00:19:14.680 }, 00:19:14.680 "auth": { 00:19:14.680 "state": "completed", 00:19:14.680 "digest": "sha512", 00:19:14.680 "dhgroup": "ffdhe4096" 00:19:14.680 } 00:19:14.680 } 00:19:14.680 ]' 00:19:14.680 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.680 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.680 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.680 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.680 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.680 10:57:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.680 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.680 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.940 10:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:19:15.509 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.509 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.509 10:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.509 10:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.509 10:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.509 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.509 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.509 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.770 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.030 00:19:16.030 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.030 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.030 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.030 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.030 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.030 10:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.031 10:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.031 10:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.031 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.031 { 00:19:16.031 "cntlid": 125, 00:19:16.031 "qid": 0, 00:19:16.031 "state": "enabled", 00:19:16.031 "thread": "nvmf_tgt_poll_group_000", 00:19:16.031 "listen_address": { 00:19:16.031 "trtype": "TCP", 00:19:16.031 "adrfam": "IPv4", 00:19:16.031 "traddr": "10.0.0.2", 00:19:16.031 "trsvcid": "4420" 00:19:16.031 }, 00:19:16.031 "peer_address": { 00:19:16.031 "trtype": "TCP", 00:19:16.031 "adrfam": "IPv4", 00:19:16.031 "traddr": "10.0.0.1", 00:19:16.031 "trsvcid": "51718" 00:19:16.031 }, 00:19:16.031 "auth": { 00:19:16.031 "state": "completed", 00:19:16.031 "digest": "sha512", 00:19:16.031 "dhgroup": "ffdhe4096" 00:19:16.031 } 00:19:16.031 } 00:19:16.031 ]' 00:19:16.031 10:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.031 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.031 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.291 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:16.291 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.291 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.291 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.291 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.551 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:19:17.174 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
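For reference, each iteration traced above boils down to the following RPC round. This is a condensed sketch reconstructed from the trace (here the sha512 / ffdhe4096 / key2 pass just completed), not part of the harness itself; the RPC shell variable is shorthand introduced for readability, while every command, flag, address, and NQN is copied from the log. rpc.py invoked with -s /var/tmp/host.sock drives the host-side bdev_nvme stack; invoked without -s it drives the nvmf target. The key files key2/ckey2 were registered earlier in the run.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # restrict the host to the digest/dhgroup pair under test
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # authorize the host NQN on the subsystem with that key pair
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # attach with mutual DH-HMAC-CHAP, verify the negotiated parameters, detach
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  # expected, as in the qpair dumps above: state "completed",
  # digest "sha512", dhgroup "ffdhe4096"
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0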
00:19:17.174 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.174 10:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.174 10:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.174 10:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.174 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.174 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:17.174 10:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.174 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.434 00:19:17.434 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.434 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.434 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.695 { 00:19:17.695 "cntlid": 127, 00:19:17.695 "qid": 0, 00:19:17.695 "state": "enabled", 00:19:17.695 "thread": "nvmf_tgt_poll_group_000", 00:19:17.695 "listen_address": { 00:19:17.695 "trtype": "TCP", 00:19:17.695 "adrfam": "IPv4", 00:19:17.695 "traddr": "10.0.0.2", 00:19:17.695 "trsvcid": "4420" 00:19:17.695 }, 00:19:17.695 "peer_address": { 00:19:17.695 "trtype": "TCP", 00:19:17.695 "adrfam": "IPv4", 00:19:17.695 "traddr": "10.0.0.1", 00:19:17.695 "trsvcid": "51736" 00:19:17.695 }, 00:19:17.695 "auth": { 00:19:17.695 "state": "completed", 00:19:17.695 "digest": "sha512", 00:19:17.695 "dhgroup": "ffdhe4096" 00:19:17.695 } 00:19:17.695 } 00:19:17.695 ]' 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.695 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.956 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.956 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.956 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.956 10:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:19:18.528 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.528 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.528 10:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.528 10:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.528 10:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.528 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.528 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.528 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:18.528 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.788 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.049 00:19:19.049 10:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.049 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.049 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.310 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.310 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.310 10:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.310 10:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.310 10:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.310 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.310 { 00:19:19.310 "cntlid": 129, 00:19:19.310 "qid": 0, 00:19:19.310 "state": "enabled", 00:19:19.310 "thread": "nvmf_tgt_poll_group_000", 00:19:19.310 "listen_address": { 00:19:19.310 "trtype": "TCP", 00:19:19.310 "adrfam": "IPv4", 00:19:19.310 "traddr": "10.0.0.2", 00:19:19.310 "trsvcid": "4420" 00:19:19.310 }, 00:19:19.310 "peer_address": { 00:19:19.310 "trtype": "TCP", 00:19:19.310 "adrfam": "IPv4", 00:19:19.310 "traddr": "10.0.0.1", 00:19:19.310 "trsvcid": "51762" 00:19:19.310 }, 00:19:19.310 "auth": { 00:19:19.310 "state": "completed", 00:19:19.310 "digest": "sha512", 00:19:19.310 "dhgroup": "ffdhe6144" 00:19:19.310 } 00:19:19.310 } 00:19:19.310 ]' 00:19:19.310 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.310 10:57:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.310 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.310 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.310 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.571 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.571 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.571 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.571 10:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.513 10:57:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.513 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.514 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.774 00:19:20.774 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.774 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.774 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.035 { 00:19:21.035 "cntlid": 131, 00:19:21.035 "qid": 0, 00:19:21.035 "state": "enabled", 00:19:21.035 "thread": "nvmf_tgt_poll_group_000", 00:19:21.035 "listen_address": { 00:19:21.035 "trtype": "TCP", 00:19:21.035 "adrfam": "IPv4", 00:19:21.035 "traddr": "10.0.0.2", 00:19:21.035 "trsvcid": "4420" 00:19:21.035 }, 00:19:21.035 "peer_address": { 00:19:21.035 "trtype": "TCP", 00:19:21.035 "adrfam": "IPv4", 00:19:21.035 "traddr": "10.0.0.1", 00:19:21.035 "trsvcid": "36224" 00:19:21.035 }, 00:19:21.035 "auth": { 00:19:21.035 "state": "completed", 00:19:21.035 "digest": "sha512", 00:19:21.035 "dhgroup": "ffdhe6144" 00:19:21.035 } 00:19:21.035 } 00:19:21.035 ]' 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.035 10:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.035 10:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.035 10:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.035 10:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.296 10:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:19:21.868 10:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.868 10:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.868 10:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.868 10:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 10:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.868 10:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.868 10:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.130 10:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.130 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.390 00:19:22.390 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.390 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.390 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.651 { 00:19:22.651 "cntlid": 133, 00:19:22.651 "qid": 0, 00:19:22.651 "state": "enabled", 00:19:22.651 "thread": "nvmf_tgt_poll_group_000", 00:19:22.651 "listen_address": { 00:19:22.651 "trtype": "TCP", 00:19:22.651 "adrfam": "IPv4", 00:19:22.651 "traddr": "10.0.0.2", 00:19:22.651 "trsvcid": "4420" 00:19:22.651 }, 00:19:22.651 "peer_address": { 00:19:22.651 "trtype": "TCP", 00:19:22.651 "adrfam": "IPv4", 00:19:22.651 "traddr": "10.0.0.1", 00:19:22.651 "trsvcid": "36258" 00:19:22.651 }, 00:19:22.651 "auth": { 00:19:22.651 "state": "completed", 00:19:22.651 "digest": "sha512", 00:19:22.651 "dhgroup": "ffdhe6144" 00:19:22.651 } 00:19:22.651 } 00:19:22.651 ]' 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:22.651 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.912 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.912 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.912 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.912 10:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
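Each round then cross-checks the same credentials through the kernel initiator before deauthorizing the host, as in this sketch condensed from the trace just above (the DHHC-1 secret strings are elided here; the full wire-format values appear in the log):

  # connect via the kernel NVMe/TCP initiator using the same key pair
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # deauthorize the host again so the next keyid iteration starts clean
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be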
00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.854 10:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.116 00:19:24.116 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.116 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.116 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.377 { 00:19:24.377 "cntlid": 135, 00:19:24.377 "qid": 0, 00:19:24.377 "state": "enabled", 00:19:24.377 "thread": "nvmf_tgt_poll_group_000", 00:19:24.377 "listen_address": { 00:19:24.377 "trtype": "TCP", 00:19:24.377 "adrfam": "IPv4", 00:19:24.377 "traddr": "10.0.0.2", 00:19:24.377 "trsvcid": 
"4420" 00:19:24.377 }, 00:19:24.377 "peer_address": { 00:19:24.377 "trtype": "TCP", 00:19:24.377 "adrfam": "IPv4", 00:19:24.377 "traddr": "10.0.0.1", 00:19:24.377 "trsvcid": "36282" 00:19:24.377 }, 00:19:24.377 "auth": { 00:19:24.377 "state": "completed", 00:19:24.377 "digest": "sha512", 00:19:24.377 "dhgroup": "ffdhe6144" 00:19:24.377 } 00:19:24.377 } 00:19:24.377 ]' 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.377 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.638 10:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:19:25.209 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.209 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.209 10:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.209 10:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.209 10:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.209 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.209 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.209 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.209 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.470 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.471 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.731 00:19:25.731 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.731 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.731 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.992 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.992 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.992 10:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.992 10:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.992 10:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.992 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.992 { 00:19:25.992 "cntlid": 137, 00:19:25.992 "qid": 0, 00:19:25.992 "state": "enabled", 00:19:25.992 "thread": "nvmf_tgt_poll_group_000", 00:19:25.992 "listen_address": { 00:19:25.992 "trtype": "TCP", 00:19:25.992 "adrfam": "IPv4", 00:19:25.992 "traddr": "10.0.0.2", 00:19:25.992 "trsvcid": "4420" 00:19:25.992 }, 00:19:25.992 "peer_address": { 00:19:25.992 "trtype": "TCP", 00:19:25.992 "adrfam": "IPv4", 00:19:25.992 "traddr": "10.0.0.1", 00:19:25.992 "trsvcid": "36318" 00:19:25.992 }, 00:19:25.992 "auth": { 00:19:25.992 "state": "completed", 00:19:25.992 "digest": "sha512", 00:19:25.992 "dhgroup": "ffdhe8192" 00:19:25.992 } 00:19:25.992 } 00:19:25.992 ]' 00:19:25.992 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.992 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.992 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.253 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.253 10:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.253 10:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:26.253 10:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.253 10:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.253 10:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:19:27.194 10:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.194 10:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.194 10:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.194 10:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.194 10:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.194 10:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.194 10:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.194 10:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.194 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.764 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.764 { 00:19:27.764 "cntlid": 139, 00:19:27.764 "qid": 0, 00:19:27.764 "state": "enabled", 00:19:27.764 "thread": "nvmf_tgt_poll_group_000", 00:19:27.764 "listen_address": { 00:19:27.764 "trtype": "TCP", 00:19:27.764 "adrfam": "IPv4", 00:19:27.764 "traddr": "10.0.0.2", 00:19:27.764 "trsvcid": "4420" 00:19:27.764 }, 00:19:27.764 "peer_address": { 00:19:27.764 "trtype": "TCP", 00:19:27.764 "adrfam": "IPv4", 00:19:27.764 "traddr": "10.0.0.1", 00:19:27.764 "trsvcid": "36362" 00:19:27.764 }, 00:19:27.764 "auth": { 00:19:27.764 "state": "completed", 00:19:27.764 "digest": "sha512", 00:19:27.764 "dhgroup": "ffdhe8192" 00:19:27.764 } 00:19:27.764 } 00:19:27.764 ]' 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.764 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.025 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.025 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.025 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.025 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.025 10:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.286 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDNjYWM2NWMzMjY3ZjA5MGQ4ZTFjNWI4ZDkxOWQ4NjDxaRGm: --dhchap-ctrl-secret DHHC-1:02:NDc3NjRlZmVlM2YwYTI2ZWQ3MDZjMzg0MTZhNjE5ZmZiMTEwOWM0MTBlMjdlMGM2+7Rj5g==: 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
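The round that finishes here (sha512 / ffdhe8192 / key1) is the same verify cycle the suite repeats for every digest, DH group, and key id: attach a host controller through the host-side RPC socket, then read the subsystem's queue pairs back and check how they authenticated. A minimal bash sketch of one cycle, reusing the socket paths and NQNs visible in the trace; the target-side query is shown against the default /var/tmp/spdk.sock, whereas this run actually drives it through the suite's rpc_cmd wrapper.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Host side: attach a controller that authenticates with key1/ckey1.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Target side: the qpair listing reports digest, DH group, and auth state.
    qpairs=$("$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down before the next digest/dhgroup/key combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0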
00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.857 10:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.118 10:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.118 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.118 10:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.377 00:19:29.377 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.377 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.377 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.637 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.637 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.637 10:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
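The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line traced at target/auth.sh@37 is the switch between bidirectional and unidirectional authentication: the :+ parameter expansion emits the --dhchap-ctrlr-key option pair only when a controller key was configured for that key id, and nothing otherwise (key3 in this run is the one without a controller key). A stripped-down bash illustration of the idiom; the array contents and the subnqn/hostnqn variables are stand-ins, not values from the script.

    ckeys=(c0 c1 c2 "")   # index 3 left empty: no controller key for key3

    add_host_sketch() {
        local keyid=$1
        # Expands to the two-word option pair when ckeys[keyid] is non-empty,
        # and to zero words when it is empty or unset.
        local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
    }

    add_host_sketch 2   # runs with --dhchap-key key2 --dhchap-ctrlr-key ckey2
    add_host_sketch 3   # runs with --dhchap-key key3 only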
00:19:29.637 10:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.638 10:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.638 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.638 { 00:19:29.638 "cntlid": 141, 00:19:29.638 "qid": 0, 00:19:29.638 "state": "enabled", 00:19:29.638 "thread": "nvmf_tgt_poll_group_000", 00:19:29.638 "listen_address": { 00:19:29.638 "trtype": "TCP", 00:19:29.638 "adrfam": "IPv4", 00:19:29.638 "traddr": "10.0.0.2", 00:19:29.638 "trsvcid": "4420" 00:19:29.638 }, 00:19:29.638 "peer_address": { 00:19:29.638 "trtype": "TCP", 00:19:29.638 "adrfam": "IPv4", 00:19:29.638 "traddr": "10.0.0.1", 00:19:29.638 "trsvcid": "36398" 00:19:29.638 }, 00:19:29.638 "auth": { 00:19:29.638 "state": "completed", 00:19:29.638 "digest": "sha512", 00:19:29.638 "dhgroup": "ffdhe8192" 00:19:29.638 } 00:19:29.638 } 00:19:29.638 ]' 00:19:29.638 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.638 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.638 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.638 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.638 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.898 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.898 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.898 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.898 10:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTg1MjJmMDc2NmNkZWVmMzczMGM0N2VmNTc0NDVhZDAyMmVlYzUwN2E2ZGMyNTc1dRKDRw==: --dhchap-ctrl-secret DHHC-1:01:OTJkMzVlNjQyMTJlZDk1Mzc2ZjMxNWQ3YWE3ODIyYzccR8gb: 00:19:30.467 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.728 10:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.299 00:19:31.299 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.299 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.299 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.299 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.299 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.299 10:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.299 10:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.299 10:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.299 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.299 { 00:19:31.299 "cntlid": 143, 00:19:31.299 "qid": 0, 00:19:31.299 "state": "enabled", 00:19:31.299 "thread": "nvmf_tgt_poll_group_000", 00:19:31.299 "listen_address": { 00:19:31.299 "trtype": "TCP", 00:19:31.299 "adrfam": "IPv4", 00:19:31.299 "traddr": "10.0.0.2", 00:19:31.299 "trsvcid": "4420" 00:19:31.299 }, 00:19:31.299 "peer_address": { 00:19:31.299 "trtype": "TCP", 00:19:31.299 "adrfam": "IPv4", 00:19:31.299 "traddr": "10.0.0.1", 00:19:31.299 "trsvcid": "36798" 00:19:31.299 }, 00:19:31.299 "auth": { 00:19:31.299 "state": "completed", 00:19:31.299 "digest": "sha512", 00:19:31.299 "dhgroup": "ffdhe8192" 00:19:31.299 } 00:19:31.299 } 00:19:31.299 ]' 00:19:31.299 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.559 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.559 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.559 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.559 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.559 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.559 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.559 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.819 10:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:32.386 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.646 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.905 00:19:32.905 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.905 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.905 10:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.165 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.165 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.165 10:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.165 10:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.165 10:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.165 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.165 { 00:19:33.165 "cntlid": 145, 00:19:33.165 "qid": 0, 00:19:33.165 "state": "enabled", 00:19:33.165 "thread": "nvmf_tgt_poll_group_000", 00:19:33.165 "listen_address": { 00:19:33.165 "trtype": "TCP", 00:19:33.165 "adrfam": "IPv4", 00:19:33.165 "traddr": "10.0.0.2", 00:19:33.165 "trsvcid": "4420" 00:19:33.165 }, 00:19:33.165 "peer_address": { 00:19:33.165 "trtype": "TCP", 00:19:33.165 "adrfam": "IPv4", 00:19:33.165 "traddr": "10.0.0.1", 00:19:33.165 "trsvcid": "36822" 00:19:33.165 }, 00:19:33.165 "auth": { 00:19:33.165 "state": "completed", 00:19:33.165 "digest": "sha512", 00:19:33.165 "dhgroup": "ffdhe8192" 00:19:33.165 } 00:19:33.165 } 00:19:33.165 ]' 00:19:33.165 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.165 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.165 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.441 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.441 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.441 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.441 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.441 10:57:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.441 10:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDQ2ZmNmYWFmNDU0ZjJlNmMwZTU5N2IxODA4ZGZmMWFmZGU5ZGUwOWI3NGEzZjcztUvQ6w==: --dhchap-ctrl-secret DHHC-1:03:MWYzOGY1NTVkZWMxOWJlZTA5M2VjMWU3NWU2MDg3YjFkODFlMGQ3ZTlmNmVkYmRhYWUyMTc0ZmE0YjBjOGNhM2ZIXgs=: 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:34.382 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:34.383 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.383 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.642 request: 00:19:34.642 { 00:19:34.642 "name": "nvme0", 00:19:34.642 "trtype": "tcp", 00:19:34.642 "traddr": "10.0.0.2", 00:19:34.642 "adrfam": "ipv4", 00:19:34.642 "trsvcid": "4420", 00:19:34.642 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:34.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:34.642 "prchk_reftag": false, 00:19:34.642 "prchk_guard": false, 00:19:34.642 "hdgst": false, 00:19:34.642 "ddgst": false, 00:19:34.642 "dhchap_key": "key2", 00:19:34.642 "method": "bdev_nvme_attach_controller", 00:19:34.642 "req_id": 1 00:19:34.642 } 00:19:34.642 Got JSON-RPC error response 00:19:34.642 response: 00:19:34.642 { 00:19:34.642 "code": -5, 00:19:34.642 "message": "Input/output error" 00:19:34.642 } 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:34.642 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:34.643 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:35.214 request: 00:19:35.214 { 00:19:35.214 "name": "nvme0", 00:19:35.214 "trtype": "tcp", 00:19:35.214 "traddr": "10.0.0.2", 00:19:35.214 "adrfam": "ipv4", 00:19:35.214 "trsvcid": "4420", 00:19:35.214 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:35.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:35.214 "prchk_reftag": false, 00:19:35.214 "prchk_guard": false, 00:19:35.214 "hdgst": false, 00:19:35.214 "ddgst": false, 00:19:35.214 "dhchap_key": "key1", 00:19:35.214 "dhchap_ctrlr_key": "ckey2", 00:19:35.214 "method": "bdev_nvme_attach_controller", 00:19:35.214 "req_id": 1 00:19:35.214 } 00:19:35.214 Got JSON-RPC error response 00:19:35.214 response: 00:19:35.214 { 00:19:35.214 "code": -5, 00:19:35.214 "message": "Input/output error" 00:19:35.214 } 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.214 10:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.475 request: 00:19:35.475 { 00:19:35.475 "name": "nvme0", 00:19:35.475 "trtype": "tcp", 00:19:35.475 "traddr": "10.0.0.2", 00:19:35.475 "adrfam": "ipv4", 00:19:35.475 "trsvcid": "4420", 00:19:35.475 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:35.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:35.475 "prchk_reftag": false, 00:19:35.475 "prchk_guard": false, 00:19:35.475 "hdgst": false, 00:19:35.475 "ddgst": false, 00:19:35.475 "dhchap_key": "key1", 00:19:35.475 "dhchap_ctrlr_key": "ckey1", 00:19:35.475 "method": "bdev_nvme_attach_controller", 00:19:35.475 "req_id": 1 00:19:35.475 } 00:19:35.475 Got JSON-RPC error response 00:19:35.475 response: 00:19:35.475 { 00:19:35.475 "code": -5, 00:19:35.475 "message": "Input/output error" 00:19:35.475 } 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2090160 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2090160 ']' 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2090160 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:35.475 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2090160 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2090160' 00:19:35.735 killing process with pid 2090160 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2090160 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2090160 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2115527 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2115527 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2115527 ']' 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.735 10:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2115527 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2115527 ']' 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
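The restart above swaps the target for one started under --wait-for-rpc, which holds the app in its pre-init state so authentication can be reconfigured over RPC before the framework starts, and -L nvmf_auth, which enables debug logging for the DH-HMAC-CHAP state machine. A rough bash sketch of the same sequence, assuming the netns and paths from the trace; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, and old_pid stands in for the previous target's pid.

    # Stop the previous target and reap it.
    kill "$old_pid" && wait "$old_pid" 2>/dev/null

    # Relaunch inside the test netns with auth debug logging enabled.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Block until the RPC socket answers; rpc_get_methods is among the few
    # calls the app accepts while still in the pre-init state.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done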
00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.676 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.936 10:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.197 00:19:37.457 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.457 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.457 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.457 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.457 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.457 10:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.457 10:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.457 10:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.457 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.457 { 00:19:37.457 
"cntlid": 1, 00:19:37.457 "qid": 0, 00:19:37.457 "state": "enabled", 00:19:37.457 "thread": "nvmf_tgt_poll_group_000", 00:19:37.458 "listen_address": { 00:19:37.458 "trtype": "TCP", 00:19:37.458 "adrfam": "IPv4", 00:19:37.458 "traddr": "10.0.0.2", 00:19:37.458 "trsvcid": "4420" 00:19:37.458 }, 00:19:37.458 "peer_address": { 00:19:37.458 "trtype": "TCP", 00:19:37.458 "adrfam": "IPv4", 00:19:37.458 "traddr": "10.0.0.1", 00:19:37.458 "trsvcid": "36868" 00:19:37.458 }, 00:19:37.458 "auth": { 00:19:37.458 "state": "completed", 00:19:37.458 "digest": "sha512", 00:19:37.458 "dhgroup": "ffdhe8192" 00:19:37.458 } 00:19:37.458 } 00:19:37.458 ]' 00:19:37.458 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.458 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.458 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.718 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.718 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.718 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.718 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.718 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.718 10:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NjkzZWY4NDFkYTc2NzQzYWEwMWMxYTk5NjIxNmZlMjBiOTY4NWQ1OGFmNmE0MGFmYTY4ZTM1NGUwY2JmMjA1MSQUGYw=: 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.659 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.920 request: 00:19:38.920 { 00:19:38.920 "name": "nvme0", 00:19:38.920 "trtype": "tcp", 00:19:38.920 "traddr": "10.0.0.2", 00:19:38.920 "adrfam": "ipv4", 00:19:38.920 "trsvcid": "4420", 00:19:38.920 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:38.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:38.920 "prchk_reftag": false, 00:19:38.920 "prchk_guard": false, 00:19:38.920 "hdgst": false, 00:19:38.920 "ddgst": false, 00:19:38.920 "dhchap_key": "key3", 00:19:38.920 "method": "bdev_nvme_attach_controller", 00:19:38.920 "req_id": 1 00:19:38.920 } 00:19:38.920 Got JSON-RPC error response 00:19:38.920 response: 00:19:38.920 { 00:19:38.920 "code": -5, 00:19:38.920 "message": "Input/output error" 00:19:38.920 } 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.920 10:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.180 request: 00:19:39.180 { 00:19:39.180 "name": "nvme0", 00:19:39.180 "trtype": "tcp", 00:19:39.180 "traddr": "10.0.0.2", 00:19:39.180 "adrfam": "ipv4", 00:19:39.180 "trsvcid": "4420", 00:19:39.180 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:39.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:39.181 "prchk_reftag": false, 00:19:39.181 "prchk_guard": false, 00:19:39.181 "hdgst": false, 00:19:39.181 "ddgst": false, 00:19:39.181 "dhchap_key": "key3", 00:19:39.181 "method": "bdev_nvme_attach_controller", 00:19:39.181 "req_id": 1 00:19:39.181 } 00:19:39.181 Got JSON-RPC error response 00:19:39.181 response: 00:19:39.181 { 00:19:39.181 "code": -5, 00:19:39.181 "message": "Input/output error" 00:19:39.181 } 00:19:39.181 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:39.181 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:39.181 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:39.181 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:39.181 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:39.181 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:39.181 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:39.181 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.181 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.181 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.441 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.442 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.442 request: 00:19:39.442 { 00:19:39.442 "name": "nvme0", 00:19:39.442 "trtype": "tcp", 00:19:39.442 "traddr": "10.0.0.2", 00:19:39.442 "adrfam": "ipv4", 00:19:39.442 "trsvcid": "4420", 00:19:39.442 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:39.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:39.442 "prchk_reftag": false, 00:19:39.442 "prchk_guard": false, 00:19:39.442 "hdgst": false, 00:19:39.442 "ddgst": false, 00:19:39.442 
"dhchap_key": "key0", 00:19:39.442 "dhchap_ctrlr_key": "key1", 00:19:39.442 "method": "bdev_nvme_attach_controller", 00:19:39.442 "req_id": 1 00:19:39.442 } 00:19:39.442 Got JSON-RPC error response 00:19:39.442 response: 00:19:39.442 { 00:19:39.442 "code": -5, 00:19:39.442 "message": "Input/output error" 00:19:39.442 } 00:19:39.718 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:39.718 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:39.718 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:39.718 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:39.718 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:39.718 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:39.718 00:19:39.718 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:39.718 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:39.718 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.983 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.983 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.983 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.243 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:40.243 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:40.243 10:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2090429 00:19:40.243 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2090429 ']' 00:19:40.243 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2090429 00:19:40.243 10:57:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:40.243 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.243 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2090429 00:19:40.243 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:40.243 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:40.243 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2090429' 00:19:40.243 killing process with pid 2090429 00:19:40.243 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2090429 00:19:40.243 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2090429 
00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:40.504 rmmod nvme_tcp 00:19:40.504 rmmod nvme_fabrics 00:19:40.504 rmmod nvme_keyring 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2115527 ']' 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2115527 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2115527 ']' 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2115527 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2115527 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2115527' 00:19:40.504 killing process with pid 2115527 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2115527 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2115527 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.504 10:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.048 10:57:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:43.048 10:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ajI /tmp/spdk.key-sha256.AiM /tmp/spdk.key-sha384.V2r /tmp/spdk.key-sha512.dpp /tmp/spdk.key-sha512.bLZ /tmp/spdk.key-sha384.Bya /tmp/spdk.key-sha256.u5V '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:43.048 00:19:43.048 real 2m16.208s 00:19:43.048 user 5m5.282s 00:19:43.048 sys 0m22.010s 00:19:43.048 10:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.048 10:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.048 ************************************ 00:19:43.048 END TEST nvmf_auth_target 00:19:43.048 ************************************ 00:19:43.048 10:57:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:43.048 10:57:59 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:43.048 10:57:59 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:43.048 10:57:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:43.048 10:57:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.048 10:57:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:43.048 ************************************ 00:19:43.048 START TEST nvmf_bdevio_no_huge 00:19:43.048 ************************************ 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:43.048 * Looking for test storage... 00:19:43.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
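One detail worth noting in the preamble above: the host identity is not fixed but regenerated per run. Roughly, nvmf/common.sh does the following (illustrative paraphrase; the uuid in this log is simply what nvme gen-hostnqn produced on this machine):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare uuid, reused as the --hostid value
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")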
00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.048 10:57:59 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:43.048 10:57:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.190 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:51.191 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:51.191 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.191 
10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:51.191 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:51.191 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.191 10:58:06 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:51.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:19:51.191 00:19:51.191 --- 10.0.0.2 ping statistics --- 00:19:51.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.191 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:19:51.191 00:19:51.191 --- 10.0.0.1 ping statistics --- 00:19:51.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.191 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:51.191 10:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2120699 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
2120699 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2120699 ']' 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.191 [2024-07-12 10:58:07.085934] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:51.191 [2024-07-12 10:58:07.086009] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:51.191 [2024-07-12 10:58:07.181294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.191 [2024-07-12 10:58:07.290105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.191 [2024-07-12 10:58:07.290165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.191 [2024-07-12 10:58:07.290174] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.191 [2024-07-12 10:58:07.290181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.191 [2024-07-12 10:58:07.290187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
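Stripped of the harness plumbing, the no-huge target bring-up above is a single invocation inside the target's network namespace (condensed sketch of the command already logged, workspace paths shortened, flag meanings spelled out):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    # -m 0x78 selects cores 3-6, matching the four reactor notices that follow
    # --no-huge -s 1024 backs DPDK with 1024 MB of ordinary pages, no hugetlbfs
    # -e 0xFFFF enables all tracepoint groups; -i 0 fixes the shared-memory id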
00:19:51.191 [2024-07-12 10:58:07.290363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:51.191 [2024-07-12 10:58:07.290625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:51.191 [2024-07-12 10:58:07.290761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.191 [2024-07-12 10:58:07.290759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.191 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.192 [2024-07-12 10:58:07.937825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.192 Malloc0 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.192 [2024-07-12 10:58:07.991452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.192 10:58:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.192 { 00:19:51.192 "params": { 00:19:51.192 "name": "Nvme$subsystem", 00:19:51.192 "trtype": "$TEST_TRANSPORT", 00:19:51.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.192 "adrfam": "ipv4", 00:19:51.192 "trsvcid": "$NVMF_PORT", 00:19:51.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.192 "hdgst": ${hdgst:-false}, 00:19:51.192 "ddgst": ${ddgst:-false} 00:19:51.192 }, 00:19:51.192 "method": "bdev_nvme_attach_controller" 00:19:51.192 } 00:19:51.192 EOF 00:19:51.192 )") 00:19:51.192 10:58:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:51.192 10:58:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:51.192 10:58:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:51.192 10:58:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:51.192 "params": { 00:19:51.192 "name": "Nvme1", 00:19:51.192 "trtype": "tcp", 00:19:51.192 "traddr": "10.0.0.2", 00:19:51.192 "adrfam": "ipv4", 00:19:51.192 "trsvcid": "4420", 00:19:51.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.192 "hdgst": false, 00:19:51.192 "ddgst": false 00:19:51.192 }, 00:19:51.192 "method": "bdev_nvme_attach_controller" 00:19:51.192 }' 00:19:51.192 [2024-07-12 10:58:08.048426] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:51.192 [2024-07-12 10:58:08.048503] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2120794 ] 00:19:51.192 [2024-07-12 10:58:08.134533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:51.454 [2024-07-12 10:58:08.240187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.454 [2024-07-12 10:58:08.240256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.454 [2024-07-12 10:58:08.240276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.454 I/O targets: 00:19:51.454 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:51.454 00:19:51.454 00:19:51.454 CUnit - A unit testing framework for C - Version 2.1-3 00:19:51.454 http://cunit.sourceforge.net/ 00:19:51.454 00:19:51.454 00:19:51.454 Suite: bdevio tests on: Nvme1n1 00:19:51.715 Test: blockdev write read block ...passed 00:19:51.715 Test: blockdev write zeroes read block ...passed 00:19:51.715 Test: blockdev write zeroes read no split ...passed 00:19:51.715 Test: blockdev write zeroes read split ...passed 00:19:51.715 Test: blockdev write zeroes read split partial ...passed 00:19:51.715 Test: blockdev reset ...[2024-07-12 10:58:08.585919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.715 [2024-07-12 10:58:08.586016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59fc10 (9): Bad file descriptor 00:19:51.715 [2024-07-12 10:58:08.600607] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:51.715 passed 00:19:51.715 Test: blockdev write read 8 blocks ...passed 00:19:51.715 Test: blockdev write read size > 128k ...passed 00:19:51.715 Test: blockdev write read invalid size ...passed 00:19:51.976 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:51.976 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:51.976 Test: blockdev write read max offset ...passed 00:19:51.976 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:51.976 Test: blockdev writev readv 8 blocks ...passed 00:19:51.976 Test: blockdev writev readv 30 x 1block ...passed 00:19:51.976 Test: blockdev writev readv block ...passed 00:19:51.976 Test: blockdev writev readv size > 128k ...passed 00:19:51.976 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:51.976 Test: blockdev comparev and writev ...[2024-07-12 10:58:08.863200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.976 [2024-07-12 10:58:08.863248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:51.976 [2024-07-12 10:58:08.863265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.976 [2024-07-12 10:58:08.863274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:51.976 [2024-07-12 10:58:08.863739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.976 [2024-07-12 10:58:08.863752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:51.976 [2024-07-12 10:58:08.863766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.976 [2024-07-12 10:58:08.863774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:51.976 [2024-07-12 10:58:08.864259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.976 [2024-07-12 10:58:08.864270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:51.977 [2024-07-12 10:58:08.864284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.977 [2024-07-12 10:58:08.864291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:51.977 [2024-07-12 10:58:08.864717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.977 [2024-07-12 10:58:08.864730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.977 [2024-07-12 10:58:08.864743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.977 [2024-07-12 10:58:08.864752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:51.977 passed 00:19:51.977 Test: blockdev nvme passthru rw ...passed 00:19:51.977 Test: blockdev nvme passthru vendor specific ...[2024-07-12 10:58:08.948610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.977 [2024-07-12 10:58:08.948626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:51.977 [2024-07-12 10:58:08.948860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.977 [2024-07-12 10:58:08.948872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:51.977 [2024-07-12 10:58:08.949146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.977 [2024-07-12 10:58:08.949157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:51.977 [2024-07-12 10:58:08.949427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.977 [2024-07-12 10:58:08.949437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:51.977 passed 00:19:52.237 Test: blockdev nvme admin passthru ...passed 00:19:52.237 Test: blockdev copy ...passed 00:19:52.237 00:19:52.237 Run Summary: Type Total Ran Passed Failed Inactive 00:19:52.237 suites 1 1 n/a 0 0 00:19:52.237 tests 23 23 23 0 0 00:19:52.237 asserts 152 152 152 0 n/a 00:19:52.237 00:19:52.237 Elapsed time = 1.239 seconds 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.498 rmmod nvme_tcp 00:19:52.498 rmmod nvme_fabrics 00:19:52.498 rmmod nvme_keyring 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2120699 ']' 00:19:52.498 10:58:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2120699 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2120699 ']' 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2120699 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2120699 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2120699' 00:19:52.498 killing process with pid 2120699 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2120699 00:19:52.498 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2120699 00:19:52.759 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:52.759 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:52.759 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:52.759 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.759 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.759 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.759 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.759 10:58:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.305 10:58:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.305 00:19:55.305 real 0m12.166s 00:19:55.305 user 0m13.615s 00:19:55.305 sys 0m6.412s 00:19:55.305 10:58:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.305 10:58:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.305 ************************************ 00:19:55.305 END TEST nvmf_bdevio_no_huge 00:19:55.305 ************************************ 00:19:55.305 10:58:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:55.305 10:58:11 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:55.305 10:58:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:55.305 10:58:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.305 10:58:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:55.305 ************************************ 00:19:55.305 START TEST nvmf_tls 00:19:55.305 ************************************ 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:55.305 * Looking for test storage... 
00:19:55.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.305 10:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.305 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:55.305 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:55.305 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.305 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:55.306 10:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.463 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.463 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.464 
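Before the PCI walk that continues below, nvmftestinit has set the stage. Pieced together from the nvmf/common.sh xtrace above, its control flow is approximately the following; details and error paths are omitted, so this is a reading aid for the trace, not the authoritative body.

    # Approximate flow of nvmftestinit / prepare_net_devs as traced above
    # (nvmf/common.sh@441-448 and @410-414).
    nvmftestinit() {
        [[ -n $TEST_TRANSPORT ]] || return 1    # tcp here, from --transport=tcp
        trap nvmftestfini SIGINT SIGTERM EXIT   # guarantee teardown on any exit
        prepare_net_devs
    }

    prepare_net_devs() {
        local -g is_hw=no
        remove_spdk_ns                          # drop namespaces from earlier runs
        [[ $NET_TYPE != virt ]] && gather_supported_nvmf_pci_devs  # phy: scan NICs
    }
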
10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:03.464 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:03.464 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:03.464 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:03.464 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:03.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:20:03.464 00:20:03.464 --- 10.0.0.2 ping statistics --- 00:20:03.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.464 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:03.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:20:03.464 00:20:03.464 --- 10.0.0.1 ping statistics --- 00:20:03.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.464 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2125263 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2125263 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2125263 ']' 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.464 10:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.465 10:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.465 10:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.465 [2024-07-12 10:58:19.443003] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:03.465 [2024-07-12 10:58:19.443066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.465 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.465 [2024-07-12 10:58:19.532658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.465 [2024-07-12 10:58:19.625999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.465 [2024-07-12 10:58:19.626056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
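Collapsed into one place, the nvmf_tcp_init plumbing traced above builds a two-endpoint topology: the first e810 port (cvl_0_0) moves into a private network namespace as the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings just confirmed the path in both directions. A sketch of the sequence, not the exact nvmf/common.sh body:

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"         # target-side port

    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP

    # every nvmf_tgt launch below is then wrapped as:
    #   ip netns exec "$NVMF_TARGET_NAMESPACE" .../build/bin/nvmf_tgt ...
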
00:20:03.465 [2024-07-12 10:58:19.626064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.465 [2024-07-12 10:58:19.626071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.465 [2024-07-12 10:58:19.626082] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.465 [2024-07-12 10:58:19.626115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.465 10:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.465 10:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:03.465 10:58:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.465 10:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.465 10:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.465 10:58:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.465 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:03.465 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:03.465 true 00:20:03.726 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:03.726 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:03.726 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:03.726 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:03.726 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:03.987 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:03.987 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:03.987 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:03.987 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:03.987 10:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:04.248 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:04.249 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:04.510 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:04.510 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:04.510 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:04.510 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:04.510 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:04.510 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:04.510 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:04.771 10:58:21 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:04.771 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:05.032 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:05.032 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:05.032 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:05.032 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:05.032 10:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.sJ34bzoBoy 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.MIUccjhF3I 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.sJ34bzoBoy 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.MIUccjhF3I 00:20:05.294 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:05.554 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:05.816 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.sJ34bzoBoy 00:20:05.816 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sJ34bzoBoy 00:20:05.816 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:05.816 [2024-07-12 10:58:22.717875] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.816 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:06.077 10:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:06.077 [2024-07-12 10:58:23.022611] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.077 [2024-07-12 10:58:23.022793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.077 10:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:06.336 malloc0 00:20:06.337 10:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:06.597 10:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sJ34bzoBoy 00:20:06.597 [2024-07-12 10:58:23.469630] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:06.597 10:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.sJ34bzoBoy 00:20:06.597 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.827 Initializing NVMe Controllers 00:20:18.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:18.827 Initialization complete. Launching workers. 
00:20:18.827 ======================================================== 00:20:18.827 Latency(us) 00:20:18.827 Device Information : IOPS MiB/s Average min max 00:20:18.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19126.72 74.71 3346.32 981.98 6854.17 00:20:18.827 ======================================================== 00:20:18.827 Total : 19126.72 74.71 3346.32 981.98 6854.17 00:20:18.827 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sJ34bzoBoy 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sJ34bzoBoy' 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2128135 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2128135 /var/tmp/bdevperf.sock 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2128135 ']' 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.827 10:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.827 [2024-07-12 10:58:33.666643] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:18.827 [2024-07-12 10:58:33.666697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128135 ] 00:20:18.827 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.827 [2024-07-12 10:58:33.741045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.827 [2024-07-12 10:58:33.804296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.827 10:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.827 10:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:18.828 10:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sJ34bzoBoy 00:20:18.828 [2024-07-12 10:58:34.573739] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.828 [2024-07-12 10:58:34.573812] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:18.828 TLSTESTn1 00:20:18.828 10:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:18.828 Running I/O for 10 seconds... 00:20:28.831 00:20:28.831 Latency(us) 00:20:28.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.831 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:28.831 Verification LBA range: start 0x0 length 0x2000 00:20:28.831 TLSTESTn1 : 10.02 4667.11 18.23 0.00 0.00 27374.63 9393.49 110974.29 00:20:28.831 =================================================================================================================== 00:20:28.831 Total : 4667.11 18.23 0.00 0.00 27374.63 9393.49 110974.29 00:20:28.831 0 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2128135 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2128135 ']' 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2128135 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2128135 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2128135' 00:20:28.831 killing process with pid 2128135 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2128135 00:20:28.831 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.831 00:20:28.831 Latency(us) 00:20:28.831 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:20:28.831 =================================================================================================================== 00:20:28.831 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.831 [2024-07-12 10:58:44.884615] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2128135 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MIUccjhF3I 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MIUccjhF3I 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MIUccjhF3I 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MIUccjhF3I' 00:20:28.831 10:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.831 10:58:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2130196 00:20:28.831 10:58:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.831 10:58:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2130196 /var/tmp/bdevperf.sock 00:20:28.831 10:58:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.831 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2130196 ']' 00:20:28.831 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.831 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.831 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.831 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.831 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.831 [2024-07-12 10:58:45.059001] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:28.831 [2024-07-12 10:58:45.059058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130196 ] 00:20:28.831 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.831 [2024-07-12 10:58:45.135500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.831 [2024-07-12 10:58:45.186567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.117 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.117 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:29.117 10:58:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MIUccjhF3I 00:20:29.117 [2024-07-12 10:58:45.963313] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.117 [2024-07-12 10:58:45.963378] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:29.117 [2024-07-12 10:58:45.968523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:29.117 [2024-07-12 10:58:45.968557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a5ec0 (107): Transport endpoint is not connected 00:20:29.117 [2024-07-12 10:58:45.969529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a5ec0 (9): Bad file descriptor 00:20:29.117 [2024-07-12 10:58:45.970530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:29.117 [2024-07-12 10:58:45.970538] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:29.117 [2024-07-12 10:58:45.970545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
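Every case here, the successful TLSTESTn1 run above included, funnels through the same run_bdevperf helper; stitched together from the target/tls.sh@22-45 xtrace it reduces to the sketch below, where $rootdir stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and waitforlisten/trap details are simplified. The JSON-RPC request and error response for this failed attach print immediately below.

    # Sketch of run_bdevperf assembled from the trace; simplified, not verbatim.
    run_bdevperf() {
        local subnqn=$1 hostnqn=$2 psk=$3
        [[ -n $psk ]] && psk="--psk $psk"

        "$rootdir/build/examples/bdevperf" -m 0x4 -z \
            -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
        bdevperf_pid=$!
        waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

        # attach the TLS-protected controller; this is the step that fails
        # whenever the key or the PSK identity does not match the target's
        "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
            bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
            -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" $psk

        "$rootdir/examples/bdev/bdevperf/bdevperf.py" -t 20 \
            -s /var/tmp/bdevperf.sock perform_tests
        killprocess "$bdevperf_pid"
    }
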
00:20:29.117 request: 00:20:29.117 { 00:20:29.117 "name": "TLSTEST", 00:20:29.117 "trtype": "tcp", 00:20:29.117 "traddr": "10.0.0.2", 00:20:29.117 "adrfam": "ipv4", 00:20:29.117 "trsvcid": "4420", 00:20:29.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.117 "prchk_reftag": false, 00:20:29.117 "prchk_guard": false, 00:20:29.117 "hdgst": false, 00:20:29.117 "ddgst": false, 00:20:29.117 "psk": "/tmp/tmp.MIUccjhF3I", 00:20:29.117 "method": "bdev_nvme_attach_controller", 00:20:29.117 "req_id": 1 00:20:29.117 } 00:20:29.117 Got JSON-RPC error response 00:20:29.117 response: 00:20:29.117 { 00:20:29.117 "code": -5, 00:20:29.117 "message": "Input/output error" 00:20:29.117 } 00:20:29.117 10:58:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2130196 00:20:29.117 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2130196 ']' 00:20:29.117 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2130196 00:20:29.117 10:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:29.117 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:29.117 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2130196 00:20:29.117 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:29.117 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:29.117 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2130196' 00:20:29.117 killing process with pid 2130196 00:20:29.117 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2130196 00:20:29.117 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.117 00:20:29.117 Latency(us) 00:20:29.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.117 =================================================================================================================== 00:20:29.117 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:29.117 [2024-07-12 10:58:46.054286] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:29.117 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2130196 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sJ34bzoBoy 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sJ34bzoBoy 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sJ34bzoBoy 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sJ34bzoBoy' 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2130538 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2130538 /var/tmp/bdevperf.sock 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2130538 ']' 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.400 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.400 [2024-07-12 10:58:46.220650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:29.400 [2024-07-12 10:58:46.220751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130538 ] 00:20:29.400 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.400 [2024-07-12 10:58:46.300692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.400 [2024-07-12 10:58:46.352666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.341 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.341 10:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:30.341 10:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.sJ34bzoBoy 00:20:30.341 [2024-07-12 10:58:47.129442] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.341 [2024-07-12 10:58:47.129510] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:30.341 [2024-07-12 10:58:47.133666] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:30.341 [2024-07-12 10:58:47.133686] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:30.341 [2024-07-12 10:58:47.133705] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:30.341 [2024-07-12 10:58:47.134479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfddec0 (107): Transport endpoint is not connected 00:20:30.341 [2024-07-12 10:58:47.135473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfddec0 (9): Bad file descriptor 00:20:30.341 [2024-07-12 10:58:47.136475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:30.341 [2024-07-12 10:58:47.136482] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:30.341 [2024-07-12 10:58:47.136489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
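The failure above is the instructive one: the key file is valid this time, but the target looks PSKs up by the TLS identity string "NVMe0R01 <hostnqn> <subnqn>", and nqn.2016-06.io.spdk:host2 was never registered against cnode1, hence "Could not find PSK for identity". The key material itself came from format_interchange_psk back at target/tls.sh@118-119; its python heredoc body is elided from the trace, so the reconstruction below (base64 of the key bytes plus a little-endian CRC32, wrapped as NVMeTLSkey-1:<digest>:...:) is an inference from the observed output, not copied from the trace — the authoritative version is format_key in nvmf/common.sh. The request/response pair for this attach follows.

    # Reconstructed sketch of format_interchange_psk (nvmf/common.sh@702-705);
    # the heredoc body is an assumption inferred from the observed keys.
    format_interchange_psk() {
        local prefix=NVMeTLSkey-1 key=$1 digest=$2
        python - << EOF
    import base64, zlib
    key = b"$key"
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte integrity tag
    print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
    EOF
    }
    # e.g. format_interchange_psk 00112233445566778899aabbccddeeff 1
    #   -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
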
00:20:30.341 request: 00:20:30.341 { 00:20:30.341 "name": "TLSTEST", 00:20:30.341 "trtype": "tcp", 00:20:30.341 "traddr": "10.0.0.2", 00:20:30.341 "adrfam": "ipv4", 00:20:30.341 "trsvcid": "4420", 00:20:30.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.341 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:30.341 "prchk_reftag": false, 00:20:30.341 "prchk_guard": false, 00:20:30.341 "hdgst": false, 00:20:30.341 "ddgst": false, 00:20:30.341 "psk": "/tmp/tmp.sJ34bzoBoy", 00:20:30.341 "method": "bdev_nvme_attach_controller", 00:20:30.341 "req_id": 1 00:20:30.341 } 00:20:30.341 Got JSON-RPC error response 00:20:30.341 response: 00:20:30.341 { 00:20:30.341 "code": -5, 00:20:30.341 "message": "Input/output error" 00:20:30.341 } 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2130538 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2130538 ']' 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2130538 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2130538 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2130538' 00:20:30.341 killing process with pid 2130538 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2130538 00:20:30.341 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.341 00:20:30.341 Latency(us) 00:20:30.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.341 =================================================================================================================== 00:20:30.341 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:30.341 [2024-07-12 10:58:47.220534] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2130538 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sJ34bzoBoy 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sJ34bzoBoy 00:20:30.341 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sJ34bzoBoy 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sJ34bzoBoy' 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2130773 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2130773 /var/tmp/bdevperf.sock 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2130773 ']' 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.602 10:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.602 [2024-07-12 10:58:47.376488] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:30.602 [2024-07-12 10:58:47.376543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130773 ] 00:20:30.602 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.602 [2024-07-12 10:58:47.453471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.602 [2024-07-12 10:58:47.506098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.172 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.172 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:31.172 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sJ34bzoBoy 00:20:31.433 [2024-07-12 10:58:48.278831] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.433 [2024-07-12 10:58:48.278898] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:31.433 [2024-07-12 10:58:48.284070] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:31.433 [2024-07-12 10:58:48.284087] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:31.433 [2024-07-12 10:58:48.284106] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:31.433 [2024-07-12 10:58:48.284913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa09ec0 (107): Transport endpoint is not connected 00:20:31.433 [2024-07-12 10:58:48.285907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa09ec0 (9): Bad file descriptor 00:20:31.433 [2024-07-12 10:58:48.286909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:31.433 [2024-07-12 10:58:48.286917] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:31.433 [2024-07-12 10:58:48.286924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
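This third negative case rounds out the set: a valid key and a registered host, but pointed at nqn.2016-06.io.spdk:cnode2, for which that PSK identity was never registered, so the lookup fails exactly like the host2 case. All three are driven through the NOT wrapper whose xtrace keeps recurring (common/autotest_common.sh@648-675): it inverts the wrapped command's exit status, so these tests pass precisely because the attach fails. A simplified sketch, with the valid_exec_arg checking trimmed; the request/response dump for this attempt follows.

    # Simplified sketch of NOT; argument validation trimmed.
    NOT() {
        local es=0
        "$@" || es=$?            # run the command that is expected to fail
        if (( es > 128 )); then
            es=1                 # death-by-signal still counts as plain failure
        fi
        (( !es == 0 ))           # invert: succeed only if the command failed
    }
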
00:20:31.433 request: 00:20:31.433 { 00:20:31.433 "name": "TLSTEST", 00:20:31.433 "trtype": "tcp", 00:20:31.433 "traddr": "10.0.0.2", 00:20:31.433 "adrfam": "ipv4", 00:20:31.433 "trsvcid": "4420", 00:20:31.433 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:31.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.433 "prchk_reftag": false, 00:20:31.433 "prchk_guard": false, 00:20:31.433 "hdgst": false, 00:20:31.433 "ddgst": false, 00:20:31.433 "psk": "/tmp/tmp.sJ34bzoBoy", 00:20:31.433 "method": "bdev_nvme_attach_controller", 00:20:31.433 "req_id": 1 00:20:31.433 } 00:20:31.433 Got JSON-RPC error response 00:20:31.433 response: 00:20:31.433 { 00:20:31.433 "code": -5, 00:20:31.433 "message": "Input/output error" 00:20:31.433 } 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2130773 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2130773 ']' 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2130773 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2130773 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2130773' 00:20:31.433 killing process with pid 2130773 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2130773 00:20:31.433 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.433 00:20:31.433 Latency(us) 00:20:31.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.433 =================================================================================================================== 00:20:31.433 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:31.433 [2024-07-12 10:58:48.372192] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:31.433 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2130773 00:20:31.693 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:31.693 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:31.693 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:31.693 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:31.693 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:31.693 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2130902 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2130902 /var/tmp/bdevperf.sock 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2130902 ']' 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.694 10:58:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.694 [2024-07-12 10:58:48.529933] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:31.694 [2024-07-12 10:58:48.529985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130902 ] 00:20:31.694 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.694 [2024-07-12 10:58:48.607096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.694 [2024-07-12 10:58:48.659866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:32.635 [2024-07-12 10:58:49.446911] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:32.635 [2024-07-12 10:58:49.448297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c84a0 (9): Bad file descriptor 00:20:32.635 [2024-07-12 10:58:49.449294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:32.635 [2024-07-12 10:58:49.449305] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:32.635 [2024-07-12 10:58:49.449312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
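The failure just logged is the expected outcome of the target/tls.sh@155 case running here: the controller is attached to the TLS-only listener with no PSK at all, the target drops the TCP connection during initialization (errno 107, ENOTCONN), controller init fails, and the RPC returns -5 (Input/output error), dumped next. A minimal sketch of that negative check, reusing the addresses and NQNs from this run (assumes the target and bdevperf instance from this log are still up):

  # expected to fail: TLS-enabled listener, but no --psk passed to the initiator
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      && echo 'unexpected success' || echo 'failed as expected'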
00:20:32.635 request: 00:20:32.635 { 00:20:32.635 "name": "TLSTEST", 00:20:32.635 "trtype": "tcp", 00:20:32.635 "traddr": "10.0.0.2", 00:20:32.635 "adrfam": "ipv4", 00:20:32.635 "trsvcid": "4420", 00:20:32.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.635 "prchk_reftag": false, 00:20:32.635 "prchk_guard": false, 00:20:32.635 "hdgst": false, 00:20:32.635 "ddgst": false, 00:20:32.635 "method": "bdev_nvme_attach_controller", 00:20:32.635 "req_id": 1 00:20:32.635 } 00:20:32.635 Got JSON-RPC error response 00:20:32.635 response: 00:20:32.635 { 00:20:32.635 "code": -5, 00:20:32.635 "message": "Input/output error" 00:20:32.635 } 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2130902 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2130902 ']' 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2130902 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2130902 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2130902' 00:20:32.635 killing process with pid 2130902 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2130902 00:20:32.635 Received shutdown signal, test time was about 10.000000 seconds 00:20:32.635 00:20:32.635 Latency(us) 00:20:32.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.635 =================================================================================================================== 00:20:32.635 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:32.635 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2130902 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2125263 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2125263 ']' 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2125263 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2125263 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2125263' 00:20:32.896 
killing process with pid 2125263 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2125263 00:20:32.896 [2024-07-12 10:58:49.694657] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2125263 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.3WcvWImN9P 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.3WcvWImN9P 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2131248 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2131248 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2131248 ']' 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.896 10:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.157 [2024-07-12 10:58:49.925287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
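The key_long assembled above follows the NVMe TLS PSK interchange format, NVMeTLSkey-1:<hh>:<base64 blob>:, where 02 tags a SHA-384 PSK and the blob is the key material with a 4-byte CRC32 appended. A hedged sketch of the same assembly (assumption: the CRC32 is computed over the key text exactly as passed and appended little-endian, which matches the wWXNJw== tail seen above):

  python3 - <<'EOF'
  import base64, struct, zlib
  # key text as handed to format_interchange_psk in this log
  key = b"00112233445566778899aabbccddeeff0011223344556677"
  blob = key + struct.pack("<I", zlib.crc32(key))  # append CRC32, little-endian (assumed)
  print("NVMeTLSkey-1:02:%s:" % base64.b64encode(blob).decode())
  EOF

The file holding the key is deliberately created 0600; the chmod 0666/0600 flips later in this log exercise exactly that permission check.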
00:20:33.157 [2024-07-12 10:58:49.925340] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.157 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.157 [2024-07-12 10:58:50.007396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.157 [2024-07-12 10:58:50.065125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.157 [2024-07-12 10:58:50.065158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.157 [2024-07-12 10:58:50.065163] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.157 [2024-07-12 10:58:50.065168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.157 [2024-07-12 10:58:50.065173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.157 [2024-07-12 10:58:50.065190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.728 10:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.728 10:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:33.728 10:58:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.728 10:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.728 10:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.989 10:58:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.990 10:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.3WcvWImN9P 00:20:33.990 10:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3WcvWImN9P 00:20:33.990 10:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:33.990 [2024-07-12 10:58:50.884169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.990 10:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:34.251 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:34.251 [2024-07-12 10:58:51.176877] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.251 [2024-07-12 10:58:51.177064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.251 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.511 malloc0 00:20:34.511 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.3WcvWImN9P 00:20:34.772 [2024-07-12 10:58:51.675921] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3WcvWImN9P 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3WcvWImN9P' 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2131613 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2131613 /var/tmp/bdevperf.sock 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2131613 ']' 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.772 10:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.032 [2024-07-12 10:58:51.768313] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
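Every positive case in this suite drives I/O the same way, and all three commands appear verbatim in this log: start bdevperf in wait mode (-z) on its own RPC socket, attach the TLS controller through that socket, then kick off the workload with bdevperf.py perform_tests (paths relative to the spdk tree in this workspace):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WcvWImN9P
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests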
00:20:35.032 [2024-07-12 10:58:51.768380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131613 ] 00:20:35.032 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.032 [2024-07-12 10:58:51.843385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.032 [2024-07-12 10:58:51.894894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.603 10:58:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.603 10:58:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:35.603 10:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WcvWImN9P 00:20:35.864 [2024-07-12 10:58:52.655366] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.864 [2024-07-12 10:58:52.655427] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.864 TLSTESTn1 00:20:35.864 10:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:35.864 Running I/O for 10 seconds... 00:20:48.099 00:20:48.099 Latency(us) 00:20:48.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.099 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:48.099 Verification LBA range: start 0x0 length 0x2000 00:20:48.099 TLSTESTn1 : 10.06 4653.28 18.18 0.00 0.00 27415.68 5652.48 96556.37 00:20:48.099 =================================================================================================================== 00:20:48.099 Total : 4653.28 18.18 0.00 0.00 27415.68 5652.48 96556.37 00:20:48.099 0 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2131613 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2131613 ']' 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2131613 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2131613 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2131613' 00:20:48.099 killing process with pid 2131613 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2131613 00:20:48.099 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.099 00:20:48.099 Latency(us) 00:20:48.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:48.099 =================================================================================================================== 00:20:48.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.099 [2024-07-12 10:59:02.997969] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:48.099 10:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2131613 00:20:48.099 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.3WcvWImN9P 00:20:48.099 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3WcvWImN9P 00:20:48.099 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:48.099 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3WcvWImN9P 00:20:48.099 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:48.099 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.099 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3WcvWImN9P 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3WcvWImN9P' 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2133946 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2133946 /var/tmp/bdevperf.sock 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2133946 ']' 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.100 [2024-07-12 10:59:03.167454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
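At target/tls.sh@170-171 above, the key file has just been made world-readable and the same attach is retried under NOT, i.e. it must fail: bdev_nvme refuses to load a PSK file with group/other permission bits set, so the attempt below errors out with 'Incorrect permissions for PSK file' and the RPC returns -1 (Operation not permitted) rather than a transport error. In outline:

  chmod 0666 /tmp/tmp.3WcvWImN9P   # too permissive: the initiator rejects the PSK file
  # bdev_nvme_attach_controller ... --psk /tmp/tmp.3WcvWImN9P -> -1, see response below
  chmod 0600 /tmp/tmp.3WcvWImN9P   # accepted mode, restored at tls.sh@181 further down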
00:20:48.100 [2024-07-12 10:59:03.167511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133946 ] 00:20:48.100 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.100 [2024-07-12 10:59:03.241368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.100 [2024-07-12 10:59:03.292932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:48.100 10:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WcvWImN9P 00:20:48.100 [2024-07-12 10:59:04.065563] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.100 [2024-07-12 10:59:04.065600] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:48.100 [2024-07-12 10:59:04.065606] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.3WcvWImN9P 00:20:48.100 request: 00:20:48.100 { 00:20:48.100 "name": "TLSTEST", 00:20:48.100 "trtype": "tcp", 00:20:48.100 "traddr": "10.0.0.2", 00:20:48.100 "adrfam": "ipv4", 00:20:48.100 "trsvcid": "4420", 00:20:48.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.100 "prchk_reftag": false, 00:20:48.100 "prchk_guard": false, 00:20:48.100 "hdgst": false, 00:20:48.100 "ddgst": false, 00:20:48.100 "psk": "/tmp/tmp.3WcvWImN9P", 00:20:48.100 "method": "bdev_nvme_attach_controller", 00:20:48.100 "req_id": 1 00:20:48.100 } 00:20:48.100 Got JSON-RPC error response 00:20:48.100 response: 00:20:48.100 { 00:20:48.100 "code": -1, 00:20:48.100 "message": "Operation not permitted" 00:20:48.100 } 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2133946 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2133946 ']' 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2133946 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2133946 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2133946' 00:20:48.100 killing process with pid 2133946 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2133946 00:20:48.100 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.100 00:20:48.100 Latency(us) 00:20:48.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.100 
=================================================================================================================== 00:20:48.100 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2133946 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2131248 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2131248 ']' 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2131248 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2131248 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2131248' 00:20:48.100 killing process with pid 2131248 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2131248 00:20:48.100 [2024-07-12 10:59:04.297885] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2131248 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2134093 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2134093 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2134093 ']' 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
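The same world-readable key is now pointed at the target side: a fresh nvmf_tgt is brought up and configured below with the usual setup_nvmf_tgt sequence, and this time the final nvmf_subsystem_add_host call is the one expected to fail, since tcp_load_psk rejects the 0666 file and the RPC returns -32603 (Internal error). The sequence, as replayed below (note -k on the listener, which is what enables TLS and later shows up as "secure_channel": true in the saved config):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WcvWImN9P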
00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.100 10:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.100 [2024-07-12 10:59:04.478722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:48.100 [2024-07-12 10:59:04.478780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.100 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.100 [2024-07-12 10:59:04.560784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.100 [2024-07-12 10:59:04.616140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.100 [2024-07-12 10:59:04.616174] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.100 [2024-07-12 10:59:04.616180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.100 [2024-07-12 10:59:04.616184] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.100 [2024-07-12 10:59:04.616188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.100 [2024-07-12 10:59:04.616204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.3WcvWImN9P 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.3WcvWImN9P 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.3WcvWImN9P 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3WcvWImN9P 00:20:48.360 10:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:48.620 [2024-07-12 10:59:05.417901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.620 10:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:48.883 
10:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:48.883 [2024-07-12 10:59:05.750713] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:48.883 [2024-07-12 10:59:05.750893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.883 10:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:49.143 malloc0 00:20:49.143 10:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:49.143 10:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WcvWImN9P 00:20:49.403 [2024-07-12 10:59:06.237574] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:49.403 [2024-07-12 10:59:06.237593] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:49.403 [2024-07-12 10:59:06.237612] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:49.403 request: 00:20:49.403 { 00:20:49.403 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.403 "host": "nqn.2016-06.io.spdk:host1", 00:20:49.403 "psk": "/tmp/tmp.3WcvWImN9P", 00:20:49.403 "method": "nvmf_subsystem_add_host", 00:20:49.403 "req_id": 1 00:20:49.403 } 00:20:49.403 Got JSON-RPC error response 00:20:49.403 response: 00:20:49.403 { 00:20:49.403 "code": -32603, 00:20:49.403 "message": "Internal error" 00:20:49.403 } 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2134093 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2134093 ']' 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2134093 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2134093 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2134093' 00:20:49.403 killing process with pid 2134093 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2134093 00:20:49.403 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2134093 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.3WcvWImN9P 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:49.664 
10:59:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2134600 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2134600 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2134600 ']' 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.664 10:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.664 [2024-07-12 10:59:06.504895] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:49.664 [2024-07-12 10:59:06.504949] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.664 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.664 [2024-07-12 10:59:06.586711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.664 [2024-07-12 10:59:06.646499] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.664 [2024-07-12 10:59:06.646535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.664 [2024-07-12 10:59:06.646541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.664 [2024-07-12 10:59:06.646545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.665 [2024-07-12 10:59:06.646550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
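From here the key is back at mode 0600, so the whole happy path is repeated one more time against target pid 2134600: transport, subsystem, TLS listener, malloc0 namespace, add_host with the PSK, then a bdevperf run. At tls.sh@196-197 the test finally snapshots both applications with save_config; those snapshots are the large JSON documents closing this section (the target dump records the PSK path under nvmf_subsystem_add_host and "secure_channel": true on the listener, while the bdevperf dump records the psk under bdev_nvme_attach_controller). A sketch of the capture step, written to files here for readability where the script keeps them in shell variables ($tgtconf, $bdevperfconf):

  scripts/rpc.py save_config > tgtconf.json                                 # target-side config
  scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperfconf.json  # initiator-side config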
00:20:49.665 [2024-07-12 10:59:06.646572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.607 10:59:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.607 10:59:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:50.607 10:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.607 10:59:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:50.607 10:59:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.607 10:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.607 10:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.3WcvWImN9P 00:20:50.607 10:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3WcvWImN9P 00:20:50.607 10:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:50.607 [2024-07-12 10:59:07.465787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.607 10:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:50.867 10:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:50.867 [2024-07-12 10:59:07.798599] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.867 [2024-07-12 10:59:07.798771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.867 10:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:51.129 malloc0 00:20:51.129 10:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WcvWImN9P 00:20:51.389 [2024-07-12 10:59:08.309617] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2135024 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2135024 /var/tmp/bdevperf.sock 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2135024 ']' 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.389 10:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.650 [2024-07-12 10:59:08.389025] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:51.650 [2024-07-12 10:59:08.389078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135024 ] 00:20:51.650 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.650 [2024-07-12 10:59:08.465478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.650 [2024-07-12 10:59:08.528261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.222 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.222 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:52.222 10:59:09 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WcvWImN9P 00:20:52.483 [2024-07-12 10:59:09.297857] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.483 [2024-07-12 10:59:09.297927] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:52.483 TLSTESTn1 00:20:52.483 10:59:09 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:52.743 10:59:09 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:52.743 "subsystems": [ 00:20:52.743 { 00:20:52.743 "subsystem": "keyring", 00:20:52.743 "config": [] 00:20:52.743 }, 00:20:52.743 { 00:20:52.743 "subsystem": "iobuf", 00:20:52.743 "config": [ 00:20:52.743 { 00:20:52.743 "method": "iobuf_set_options", 00:20:52.743 "params": { 00:20:52.743 "small_pool_count": 8192, 00:20:52.743 "large_pool_count": 1024, 00:20:52.743 "small_bufsize": 8192, 00:20:52.743 "large_bufsize": 135168 00:20:52.743 } 00:20:52.743 } 00:20:52.743 ] 00:20:52.743 }, 00:20:52.743 { 00:20:52.743 "subsystem": "sock", 00:20:52.743 "config": [ 00:20:52.743 { 00:20:52.743 "method": "sock_set_default_impl", 00:20:52.743 "params": { 00:20:52.743 "impl_name": "posix" 00:20:52.743 } 00:20:52.743 }, 00:20:52.743 { 00:20:52.743 "method": "sock_impl_set_options", 00:20:52.743 "params": { 00:20:52.743 "impl_name": "ssl", 00:20:52.743 "recv_buf_size": 4096, 00:20:52.743 "send_buf_size": 4096, 00:20:52.744 "enable_recv_pipe": true, 00:20:52.744 "enable_quickack": false, 00:20:52.744 "enable_placement_id": 0, 00:20:52.744 "enable_zerocopy_send_server": true, 00:20:52.744 "enable_zerocopy_send_client": false, 00:20:52.744 "zerocopy_threshold": 0, 00:20:52.744 "tls_version": 0, 00:20:52.744 "enable_ktls": false 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "sock_impl_set_options", 00:20:52.744 "params": { 00:20:52.744 "impl_name": "posix", 00:20:52.744 "recv_buf_size": 2097152, 00:20:52.744 
"send_buf_size": 2097152, 00:20:52.744 "enable_recv_pipe": true, 00:20:52.744 "enable_quickack": false, 00:20:52.744 "enable_placement_id": 0, 00:20:52.744 "enable_zerocopy_send_server": true, 00:20:52.744 "enable_zerocopy_send_client": false, 00:20:52.744 "zerocopy_threshold": 0, 00:20:52.744 "tls_version": 0, 00:20:52.744 "enable_ktls": false 00:20:52.744 } 00:20:52.744 } 00:20:52.744 ] 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "subsystem": "vmd", 00:20:52.744 "config": [] 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "subsystem": "accel", 00:20:52.744 "config": [ 00:20:52.744 { 00:20:52.744 "method": "accel_set_options", 00:20:52.744 "params": { 00:20:52.744 "small_cache_size": 128, 00:20:52.744 "large_cache_size": 16, 00:20:52.744 "task_count": 2048, 00:20:52.744 "sequence_count": 2048, 00:20:52.744 "buf_count": 2048 00:20:52.744 } 00:20:52.744 } 00:20:52.744 ] 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "subsystem": "bdev", 00:20:52.744 "config": [ 00:20:52.744 { 00:20:52.744 "method": "bdev_set_options", 00:20:52.744 "params": { 00:20:52.744 "bdev_io_pool_size": 65535, 00:20:52.744 "bdev_io_cache_size": 256, 00:20:52.744 "bdev_auto_examine": true, 00:20:52.744 "iobuf_small_cache_size": 128, 00:20:52.744 "iobuf_large_cache_size": 16 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "bdev_raid_set_options", 00:20:52.744 "params": { 00:20:52.744 "process_window_size_kb": 1024 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "bdev_iscsi_set_options", 00:20:52.744 "params": { 00:20:52.744 "timeout_sec": 30 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "bdev_nvme_set_options", 00:20:52.744 "params": { 00:20:52.744 "action_on_timeout": "none", 00:20:52.744 "timeout_us": 0, 00:20:52.744 "timeout_admin_us": 0, 00:20:52.744 "keep_alive_timeout_ms": 10000, 00:20:52.744 "arbitration_burst": 0, 00:20:52.744 "low_priority_weight": 0, 00:20:52.744 "medium_priority_weight": 0, 00:20:52.744 "high_priority_weight": 0, 00:20:52.744 "nvme_adminq_poll_period_us": 10000, 00:20:52.744 "nvme_ioq_poll_period_us": 0, 00:20:52.744 "io_queue_requests": 0, 00:20:52.744 "delay_cmd_submit": true, 00:20:52.744 "transport_retry_count": 4, 00:20:52.744 "bdev_retry_count": 3, 00:20:52.744 "transport_ack_timeout": 0, 00:20:52.744 "ctrlr_loss_timeout_sec": 0, 00:20:52.744 "reconnect_delay_sec": 0, 00:20:52.744 "fast_io_fail_timeout_sec": 0, 00:20:52.744 "disable_auto_failback": false, 00:20:52.744 "generate_uuids": false, 00:20:52.744 "transport_tos": 0, 00:20:52.744 "nvme_error_stat": false, 00:20:52.744 "rdma_srq_size": 0, 00:20:52.744 "io_path_stat": false, 00:20:52.744 "allow_accel_sequence": false, 00:20:52.744 "rdma_max_cq_size": 0, 00:20:52.744 "rdma_cm_event_timeout_ms": 0, 00:20:52.744 "dhchap_digests": [ 00:20:52.744 "sha256", 00:20:52.744 "sha384", 00:20:52.744 "sha512" 00:20:52.744 ], 00:20:52.744 "dhchap_dhgroups": [ 00:20:52.744 "null", 00:20:52.744 "ffdhe2048", 00:20:52.744 "ffdhe3072", 00:20:52.744 "ffdhe4096", 00:20:52.744 "ffdhe6144", 00:20:52.744 "ffdhe8192" 00:20:52.744 ] 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "bdev_nvme_set_hotplug", 00:20:52.744 "params": { 00:20:52.744 "period_us": 100000, 00:20:52.744 "enable": false 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "bdev_malloc_create", 00:20:52.744 "params": { 00:20:52.744 "name": "malloc0", 00:20:52.744 "num_blocks": 8192, 00:20:52.744 "block_size": 4096, 00:20:52.744 "physical_block_size": 4096, 00:20:52.744 "uuid": 
"2642717c-16a0-46c4-8e2a-1688f63b7d4c", 00:20:52.744 "optimal_io_boundary": 0 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "bdev_wait_for_examine" 00:20:52.744 } 00:20:52.744 ] 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "subsystem": "nbd", 00:20:52.744 "config": [] 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "subsystem": "scheduler", 00:20:52.744 "config": [ 00:20:52.744 { 00:20:52.744 "method": "framework_set_scheduler", 00:20:52.744 "params": { 00:20:52.744 "name": "static" 00:20:52.744 } 00:20:52.744 } 00:20:52.744 ] 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "subsystem": "nvmf", 00:20:52.744 "config": [ 00:20:52.744 { 00:20:52.744 "method": "nvmf_set_config", 00:20:52.744 "params": { 00:20:52.744 "discovery_filter": "match_any", 00:20:52.744 "admin_cmd_passthru": { 00:20:52.744 "identify_ctrlr": false 00:20:52.744 } 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "nvmf_set_max_subsystems", 00:20:52.744 "params": { 00:20:52.744 "max_subsystems": 1024 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "nvmf_set_crdt", 00:20:52.744 "params": { 00:20:52.744 "crdt1": 0, 00:20:52.744 "crdt2": 0, 00:20:52.744 "crdt3": 0 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "nvmf_create_transport", 00:20:52.744 "params": { 00:20:52.744 "trtype": "TCP", 00:20:52.744 "max_queue_depth": 128, 00:20:52.744 "max_io_qpairs_per_ctrlr": 127, 00:20:52.744 "in_capsule_data_size": 4096, 00:20:52.744 "max_io_size": 131072, 00:20:52.744 "io_unit_size": 131072, 00:20:52.744 "max_aq_depth": 128, 00:20:52.744 "num_shared_buffers": 511, 00:20:52.744 "buf_cache_size": 4294967295, 00:20:52.744 "dif_insert_or_strip": false, 00:20:52.744 "zcopy": false, 00:20:52.744 "c2h_success": false, 00:20:52.744 "sock_priority": 0, 00:20:52.744 "abort_timeout_sec": 1, 00:20:52.744 "ack_timeout": 0, 00:20:52.744 "data_wr_pool_size": 0 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "nvmf_create_subsystem", 00:20:52.744 "params": { 00:20:52.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.744 "allow_any_host": false, 00:20:52.744 "serial_number": "SPDK00000000000001", 00:20:52.744 "model_number": "SPDK bdev Controller", 00:20:52.744 "max_namespaces": 10, 00:20:52.744 "min_cntlid": 1, 00:20:52.744 "max_cntlid": 65519, 00:20:52.744 "ana_reporting": false 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "nvmf_subsystem_add_host", 00:20:52.744 "params": { 00:20:52.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.744 "host": "nqn.2016-06.io.spdk:host1", 00:20:52.744 "psk": "/tmp/tmp.3WcvWImN9P" 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "nvmf_subsystem_add_ns", 00:20:52.744 "params": { 00:20:52.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.744 "namespace": { 00:20:52.744 "nsid": 1, 00:20:52.744 "bdev_name": "malloc0", 00:20:52.744 "nguid": "2642717C16A046C48E2A1688F63B7D4C", 00:20:52.744 "uuid": "2642717c-16a0-46c4-8e2a-1688f63b7d4c", 00:20:52.744 "no_auto_visible": false 00:20:52.744 } 00:20:52.744 } 00:20:52.744 }, 00:20:52.744 { 00:20:52.744 "method": "nvmf_subsystem_add_listener", 00:20:52.744 "params": { 00:20:52.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.744 "listen_address": { 00:20:52.744 "trtype": "TCP", 00:20:52.744 "adrfam": "IPv4", 00:20:52.744 "traddr": "10.0.0.2", 00:20:52.744 "trsvcid": "4420" 00:20:52.744 }, 00:20:52.744 "secure_channel": true 00:20:52.744 } 00:20:52.744 } 00:20:52.744 ] 00:20:52.744 } 00:20:52.744 ] 00:20:52.745 }' 00:20:52.745 10:59:09 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:53.006 10:59:09 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:53.006 "subsystems": [ 00:20:53.006 { 00:20:53.006 "subsystem": "keyring", 00:20:53.006 "config": [] 00:20:53.006 }, 00:20:53.006 { 00:20:53.006 "subsystem": "iobuf", 00:20:53.006 "config": [ 00:20:53.006 { 00:20:53.006 "method": "iobuf_set_options", 00:20:53.006 "params": { 00:20:53.006 "small_pool_count": 8192, 00:20:53.006 "large_pool_count": 1024, 00:20:53.006 "small_bufsize": 8192, 00:20:53.006 "large_bufsize": 135168 00:20:53.006 } 00:20:53.006 } 00:20:53.006 ] 00:20:53.006 }, 00:20:53.006 { 00:20:53.006 "subsystem": "sock", 00:20:53.006 "config": [ 00:20:53.006 { 00:20:53.006 "method": "sock_set_default_impl", 00:20:53.006 "params": { 00:20:53.006 "impl_name": "posix" 00:20:53.006 } 00:20:53.006 }, 00:20:53.006 { 00:20:53.006 "method": "sock_impl_set_options", 00:20:53.006 "params": { 00:20:53.006 "impl_name": "ssl", 00:20:53.006 "recv_buf_size": 4096, 00:20:53.006 "send_buf_size": 4096, 00:20:53.006 "enable_recv_pipe": true, 00:20:53.006 "enable_quickack": false, 00:20:53.006 "enable_placement_id": 0, 00:20:53.006 "enable_zerocopy_send_server": true, 00:20:53.006 "enable_zerocopy_send_client": false, 00:20:53.006 "zerocopy_threshold": 0, 00:20:53.006 "tls_version": 0, 00:20:53.006 "enable_ktls": false 00:20:53.006 } 00:20:53.006 }, 00:20:53.006 { 00:20:53.006 "method": "sock_impl_set_options", 00:20:53.006 "params": { 00:20:53.006 "impl_name": "posix", 00:20:53.006 "recv_buf_size": 2097152, 00:20:53.006 "send_buf_size": 2097152, 00:20:53.006 "enable_recv_pipe": true, 00:20:53.006 "enable_quickack": false, 00:20:53.006 "enable_placement_id": 0, 00:20:53.006 "enable_zerocopy_send_server": true, 00:20:53.006 "enable_zerocopy_send_client": false, 00:20:53.006 "zerocopy_threshold": 0, 00:20:53.006 "tls_version": 0, 00:20:53.006 "enable_ktls": false 00:20:53.006 } 00:20:53.006 } 00:20:53.006 ] 00:20:53.006 }, 00:20:53.006 { 00:20:53.006 "subsystem": "vmd", 00:20:53.006 "config": [] 00:20:53.006 }, 00:20:53.006 { 00:20:53.006 "subsystem": "accel", 00:20:53.006 "config": [ 00:20:53.006 { 00:20:53.006 "method": "accel_set_options", 00:20:53.006 "params": { 00:20:53.006 "small_cache_size": 128, 00:20:53.006 "large_cache_size": 16, 00:20:53.006 "task_count": 2048, 00:20:53.006 "sequence_count": 2048, 00:20:53.006 "buf_count": 2048 00:20:53.006 } 00:20:53.006 } 00:20:53.006 ] 00:20:53.006 }, 00:20:53.006 { 00:20:53.006 "subsystem": "bdev", 00:20:53.006 "config": [ 00:20:53.006 { 00:20:53.006 "method": "bdev_set_options", 00:20:53.006 "params": { 00:20:53.006 "bdev_io_pool_size": 65535, 00:20:53.006 "bdev_io_cache_size": 256, 00:20:53.006 "bdev_auto_examine": true, 00:20:53.006 "iobuf_small_cache_size": 128, 00:20:53.006 "iobuf_large_cache_size": 16 00:20:53.006 } 00:20:53.006 }, 00:20:53.006 { 00:20:53.006 "method": "bdev_raid_set_options", 00:20:53.006 "params": { 00:20:53.006 "process_window_size_kb": 1024 00:20:53.006 } 00:20:53.006 }, 00:20:53.006 { 00:20:53.006 "method": "bdev_iscsi_set_options", 00:20:53.006 "params": { 00:20:53.006 "timeout_sec": 30 00:20:53.006 } 00:20:53.006 }, 00:20:53.006 { 00:20:53.006 "method": "bdev_nvme_set_options", 00:20:53.006 "params": { 00:20:53.006 "action_on_timeout": "none", 00:20:53.006 "timeout_us": 0, 00:20:53.006 "timeout_admin_us": 0, 00:20:53.006 "keep_alive_timeout_ms": 10000, 00:20:53.006 "arbitration_burst": 0, 
00:20:53.006 "low_priority_weight": 0, 00:20:53.006 "medium_priority_weight": 0, 00:20:53.006 "high_priority_weight": 0, 00:20:53.006 "nvme_adminq_poll_period_us": 10000, 00:20:53.006 "nvme_ioq_poll_period_us": 0, 00:20:53.007 "io_queue_requests": 512, 00:20:53.007 "delay_cmd_submit": true, 00:20:53.007 "transport_retry_count": 4, 00:20:53.007 "bdev_retry_count": 3, 00:20:53.007 "transport_ack_timeout": 0, 00:20:53.007 "ctrlr_loss_timeout_sec": 0, 00:20:53.007 "reconnect_delay_sec": 0, 00:20:53.007 "fast_io_fail_timeout_sec": 0, 00:20:53.007 "disable_auto_failback": false, 00:20:53.007 "generate_uuids": false, 00:20:53.007 "transport_tos": 0, 00:20:53.007 "nvme_error_stat": false, 00:20:53.007 "rdma_srq_size": 0, 00:20:53.007 "io_path_stat": false, 00:20:53.007 "allow_accel_sequence": false, 00:20:53.007 "rdma_max_cq_size": 0, 00:20:53.007 "rdma_cm_event_timeout_ms": 0, 00:20:53.007 "dhchap_digests": [ 00:20:53.007 "sha256", 00:20:53.007 "sha384", 00:20:53.007 "sha512" 00:20:53.007 ], 00:20:53.007 "dhchap_dhgroups": [ 00:20:53.007 "null", 00:20:53.007 "ffdhe2048", 00:20:53.007 "ffdhe3072", 00:20:53.007 "ffdhe4096", 00:20:53.007 "ffdhe6144", 00:20:53.007 "ffdhe8192" 00:20:53.007 ] 00:20:53.007 } 00:20:53.007 }, 00:20:53.007 { 00:20:53.007 "method": "bdev_nvme_attach_controller", 00:20:53.007 "params": { 00:20:53.007 "name": "TLSTEST", 00:20:53.007 "trtype": "TCP", 00:20:53.007 "adrfam": "IPv4", 00:20:53.007 "traddr": "10.0.0.2", 00:20:53.007 "trsvcid": "4420", 00:20:53.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.007 "prchk_reftag": false, 00:20:53.007 "prchk_guard": false, 00:20:53.007 "ctrlr_loss_timeout_sec": 0, 00:20:53.007 "reconnect_delay_sec": 0, 00:20:53.007 "fast_io_fail_timeout_sec": 0, 00:20:53.007 "psk": "/tmp/tmp.3WcvWImN9P", 00:20:53.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.007 "hdgst": false, 00:20:53.007 "ddgst": false 00:20:53.007 } 00:20:53.007 }, 00:20:53.007 { 00:20:53.007 "method": "bdev_nvme_set_hotplug", 00:20:53.007 "params": { 00:20:53.007 "period_us": 100000, 00:20:53.007 "enable": false 00:20:53.007 } 00:20:53.007 }, 00:20:53.007 { 00:20:53.007 "method": "bdev_wait_for_examine" 00:20:53.007 } 00:20:53.007 ] 00:20:53.007 }, 00:20:53.007 { 00:20:53.007 "subsystem": "nbd", 00:20:53.007 "config": [] 00:20:53.007 } 00:20:53.007 ] 00:20:53.007 }' 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2135024 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2135024 ']' 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2135024 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2135024 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2135024' 00:20:53.007 killing process with pid 2135024 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2135024 00:20:53.007 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.007 00:20:53.007 Latency(us) 00:20:53.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:20:53.007 =================================================================================================================== 00:20:53.007 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.007 [2024-07-12 10:59:09.941394] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:53.007 10:59:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2135024 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2134600 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2134600 ']' 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2134600 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2134600 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2134600' 00:20:53.270 killing process with pid 2134600 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2134600 00:20:53.270 [2024-07-12 10:59:10.129225] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2134600 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.270 10:59:10 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:53.270 "subsystems": [ 00:20:53.270 { 00:20:53.270 "subsystem": "keyring", 00:20:53.270 "config": [] 00:20:53.270 }, 00:20:53.270 { 00:20:53.270 "subsystem": "iobuf", 00:20:53.270 "config": [ 00:20:53.270 { 00:20:53.270 "method": "iobuf_set_options", 00:20:53.270 "params": { 00:20:53.270 "small_pool_count": 8192, 00:20:53.271 "large_pool_count": 1024, 00:20:53.271 "small_bufsize": 8192, 00:20:53.271 "large_bufsize": 135168 00:20:53.271 } 00:20:53.271 } 00:20:53.271 ] 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "subsystem": "sock", 00:20:53.271 "config": [ 00:20:53.271 { 00:20:53.271 "method": "sock_set_default_impl", 00:20:53.271 "params": { 00:20:53.271 "impl_name": "posix" 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "sock_impl_set_options", 00:20:53.271 "params": { 00:20:53.271 "impl_name": "ssl", 00:20:53.271 "recv_buf_size": 4096, 00:20:53.271 "send_buf_size": 4096, 00:20:53.271 "enable_recv_pipe": true, 00:20:53.271 "enable_quickack": false, 00:20:53.271 "enable_placement_id": 0, 00:20:53.271 "enable_zerocopy_send_server": true, 00:20:53.271 "enable_zerocopy_send_client": false, 00:20:53.271 "zerocopy_threshold": 0, 00:20:53.271 "tls_version": 0, 00:20:53.271 "enable_ktls": false 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "sock_impl_set_options", 
00:20:53.271 "params": { 00:20:53.271 "impl_name": "posix", 00:20:53.271 "recv_buf_size": 2097152, 00:20:53.271 "send_buf_size": 2097152, 00:20:53.271 "enable_recv_pipe": true, 00:20:53.271 "enable_quickack": false, 00:20:53.271 "enable_placement_id": 0, 00:20:53.271 "enable_zerocopy_send_server": true, 00:20:53.271 "enable_zerocopy_send_client": false, 00:20:53.271 "zerocopy_threshold": 0, 00:20:53.271 "tls_version": 0, 00:20:53.271 "enable_ktls": false 00:20:53.271 } 00:20:53.271 } 00:20:53.271 ] 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "subsystem": "vmd", 00:20:53.271 "config": [] 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "subsystem": "accel", 00:20:53.271 "config": [ 00:20:53.271 { 00:20:53.271 "method": "accel_set_options", 00:20:53.271 "params": { 00:20:53.271 "small_cache_size": 128, 00:20:53.271 "large_cache_size": 16, 00:20:53.271 "task_count": 2048, 00:20:53.271 "sequence_count": 2048, 00:20:53.271 "buf_count": 2048 00:20:53.271 } 00:20:53.271 } 00:20:53.271 ] 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "subsystem": "bdev", 00:20:53.271 "config": [ 00:20:53.271 { 00:20:53.271 "method": "bdev_set_options", 00:20:53.271 "params": { 00:20:53.271 "bdev_io_pool_size": 65535, 00:20:53.271 "bdev_io_cache_size": 256, 00:20:53.271 "bdev_auto_examine": true, 00:20:53.271 "iobuf_small_cache_size": 128, 00:20:53.271 "iobuf_large_cache_size": 16 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "bdev_raid_set_options", 00:20:53.271 "params": { 00:20:53.271 "process_window_size_kb": 1024 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "bdev_iscsi_set_options", 00:20:53.271 "params": { 00:20:53.271 "timeout_sec": 30 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "bdev_nvme_set_options", 00:20:53.271 "params": { 00:20:53.271 "action_on_timeout": "none", 00:20:53.271 "timeout_us": 0, 00:20:53.271 "timeout_admin_us": 0, 00:20:53.271 "keep_alive_timeout_ms": 10000, 00:20:53.271 "arbitration_burst": 0, 00:20:53.271 "low_priority_weight": 0, 00:20:53.271 "medium_priority_weight": 0, 00:20:53.271 "high_priority_weight": 0, 00:20:53.271 "nvme_adminq_poll_period_us": 10000, 00:20:53.271 "nvme_ioq_poll_period_us": 0, 00:20:53.271 "io_queue_requests": 0, 00:20:53.271 "delay_cmd_submit": true, 00:20:53.271 "transport_retry_count": 4, 00:20:53.271 "bdev_retry_count": 3, 00:20:53.271 "transport_ack_timeout": 0, 00:20:53.271 "ctrlr_loss_timeout_sec": 0, 00:20:53.271 "reconnect_delay_sec": 0, 00:20:53.271 "fast_io_fail_timeout_sec": 0, 00:20:53.271 "disable_auto_failback": false, 00:20:53.271 "generate_uuids": false, 00:20:53.271 "transport_tos": 0, 00:20:53.271 "nvme_error_stat": false, 00:20:53.271 "rdma_srq_size": 0, 00:20:53.271 "io_path_stat": false, 00:20:53.271 "allow_accel_sequence": false, 00:20:53.271 "rdma_max_cq_size": 0, 00:20:53.271 "rdma_cm_event_timeout_ms": 0, 00:20:53.271 "dhchap_digests": [ 00:20:53.271 "sha256", 00:20:53.271 "sha384", 00:20:53.271 "sha512" 00:20:53.271 ], 00:20:53.271 "dhchap_dhgroups": [ 00:20:53.271 "null", 00:20:53.271 "ffdhe2048", 00:20:53.271 "ffdhe3072", 00:20:53.271 "ffdhe4096", 00:20:53.271 "ffdhe6144", 00:20:53.271 "ffdhe8192" 00:20:53.271 ] 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "bdev_nvme_set_hotplug", 00:20:53.271 "params": { 00:20:53.271 "period_us": 100000, 00:20:53.271 "enable": false 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "bdev_malloc_create", 00:20:53.271 "params": { 00:20:53.271 "name": "malloc0", 00:20:53.271 "num_blocks": 8192, 
00:20:53.271 "block_size": 4096, 00:20:53.271 "physical_block_size": 4096, 00:20:53.271 "uuid": "2642717c-16a0-46c4-8e2a-1688f63b7d4c", 00:20:53.271 "optimal_io_boundary": 0 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "bdev_wait_for_examine" 00:20:53.271 } 00:20:53.271 ] 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "subsystem": "nbd", 00:20:53.271 "config": [] 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "subsystem": "scheduler", 00:20:53.271 "config": [ 00:20:53.271 { 00:20:53.271 "method": "framework_set_scheduler", 00:20:53.271 "params": { 00:20:53.271 "name": "static" 00:20:53.271 } 00:20:53.271 } 00:20:53.271 ] 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "subsystem": "nvmf", 00:20:53.271 "config": [ 00:20:53.271 { 00:20:53.271 "method": "nvmf_set_config", 00:20:53.271 "params": { 00:20:53.271 "discovery_filter": "match_any", 00:20:53.271 "admin_cmd_passthru": { 00:20:53.271 "identify_ctrlr": false 00:20:53.271 } 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "nvmf_set_max_subsystems", 00:20:53.271 "params": { 00:20:53.271 "max_subsystems": 1024 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "nvmf_set_crdt", 00:20:53.271 "params": { 00:20:53.271 "crdt1": 0, 00:20:53.271 "crdt2": 0, 00:20:53.271 "crdt3": 0 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "nvmf_create_transport", 00:20:53.271 "params": { 00:20:53.271 "trtype": "TCP", 00:20:53.271 "max_queue_depth": 128, 00:20:53.271 "max_io_qpairs_per_ctrlr": 127, 00:20:53.271 "in_capsule_data_size": 4096, 00:20:53.271 "max_io_size": 131072, 00:20:53.271 "io_unit_size": 131072, 00:20:53.271 "max_aq_depth": 128, 00:20:53.271 "num_shared_buffers": 511, 00:20:53.271 "buf_cache_size": 4294967295, 00:20:53.271 "dif_insert_or_strip": false, 00:20:53.271 "zcopy": false, 00:20:53.271 "c2h_success": false, 00:20:53.271 "sock_priority": 0, 00:20:53.271 "abort_timeout_sec": 1, 00:20:53.271 "ack_timeout": 0, 00:20:53.271 "data_wr_pool_size": 0 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "nvmf_create_subsystem", 00:20:53.271 "params": { 00:20:53.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.271 "allow_any_host": false, 00:20:53.271 "serial_number": "SPDK00000000000001", 00:20:53.271 "model_number": "SPDK bdev Controller", 00:20:53.271 "max_namespaces": 10, 00:20:53.271 "min_cntlid": 1, 00:20:53.271 "max_cntlid": 65519, 00:20:53.271 "ana_reporting": false 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "nvmf_subsystem_add_host", 00:20:53.271 "params": { 00:20:53.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.271 "host": "nqn.2016-06.io.spdk:host1", 00:20:53.271 "psk": "/tmp/tmp.3WcvWImN9P" 00:20:53.271 } 00:20:53.271 }, 00:20:53.271 { 00:20:53.271 "method": "nvmf_subsystem_add_ns", 00:20:53.271 "params": { 00:20:53.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.271 "namespace": { 00:20:53.271 "nsid": 1, 00:20:53.272 "bdev_name": "malloc0", 00:20:53.272 "nguid": "2642717C16A046C48E2A1688F63B7D4C", 00:20:53.272 "uuid": "2642717c-16a0-46c4-8e2a-1688f63b7d4c", 00:20:53.272 "no_auto_visible": false 00:20:53.272 } 00:20:53.272 } 00:20:53.272 }, 00:20:53.272 { 00:20:53.272 "method": "nvmf_subsystem_add_listener", 00:20:53.272 "params": { 00:20:53.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.272 "listen_address": { 00:20:53.272 "trtype": "TCP", 00:20:53.272 "adrfam": "IPv4", 00:20:53.272 "traddr": "10.0.0.2", 00:20:53.272 "trsvcid": "4420" 00:20:53.272 }, 00:20:53.272 "secure_channel": true 00:20:53.272 } 
00:20:53.272 } 00:20:53.272 ] 00:20:53.272 } 00:20:53.272 ] 00:20:53.272 }' 00:20:53.533 10:59:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2135379 00:20:53.533 10:59:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2135379 00:20:53.533 10:59:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:53.533 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2135379 ']' 00:20:53.533 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.533 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.533 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.533 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.533 10:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.533 [2024-07-12 10:59:10.308362] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:53.533 [2024-07-12 10:59:10.308416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.533 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.533 [2024-07-12 10:59:10.388459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.533 [2024-07-12 10:59:10.442797] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.533 [2024-07-12 10:59:10.442828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.533 [2024-07-12 10:59:10.442833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.533 [2024-07-12 10:59:10.442837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.533 [2024-07-12 10:59:10.442841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
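The JSON blob echoed above reaches nvmf_tgt through -c /dev/fd/62, which appears to be bash process substitution of a configuration that was itself captured a moment earlier with save_config. A minimal sketch of that round trip, assuming an SPDK checkout as the working directory and the default /var/tmp/spdk.sock RPC socket (tgt.json is a hypothetical file name, not from the trace):

  # capture the running target's configuration as JSON
  ./scripts/rpc.py save_config > tgt.json
  # restart the target with the identical configuration;
  # <(cat tgt.json) shows up to the app as /dev/fd/NN, as in the trace
  ./build/bin/nvmf_tgt -m 0x2 -c <(cat tgt.json)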
00:20:53.533 [2024-07-12 10:59:10.442888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.794 [2024-07-12 10:59:10.625892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.794 [2024-07-12 10:59:10.641875] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:53.794 [2024-07-12 10:59:10.657921] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.794 [2024-07-12 10:59:10.676335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.364 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.364 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:54.364 10:59:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2135432 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2135432 /var/tmp/bdevperf.sock 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2135432 ']' 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.365 10:59:11 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:54.365 "subsystems": [ 00:20:54.365 { 00:20:54.365 "subsystem": "keyring", 00:20:54.365 "config": [] 00:20:54.365 }, 00:20:54.365 { 00:20:54.365 "subsystem": "iobuf", 00:20:54.365 "config": [ 00:20:54.365 { 00:20:54.365 "method": "iobuf_set_options", 00:20:54.365 "params": { 00:20:54.365 "small_pool_count": 8192, 00:20:54.365 "large_pool_count": 1024, 00:20:54.365 "small_bufsize": 8192, 00:20:54.365 "large_bufsize": 135168 00:20:54.365 } 00:20:54.365 } 00:20:54.365 ] 00:20:54.365 }, 00:20:54.365 { 00:20:54.365 "subsystem": "sock", 00:20:54.365 "config": [ 00:20:54.365 { 00:20:54.365 "method": "sock_set_default_impl", 00:20:54.365 "params": { 00:20:54.365 "impl_name": "posix" 00:20:54.365 } 00:20:54.365 }, 00:20:54.365 { 00:20:54.365 "method": "sock_impl_set_options", 00:20:54.365 "params": { 00:20:54.365 "impl_name": "ssl", 00:20:54.365 "recv_buf_size": 4096, 00:20:54.365 "send_buf_size": 4096, 00:20:54.365 "enable_recv_pipe": true, 00:20:54.365 "enable_quickack": false, 00:20:54.365 "enable_placement_id": 0, 00:20:54.365 "enable_zerocopy_send_server": true, 00:20:54.365 "enable_zerocopy_send_client": false, 00:20:54.365 "zerocopy_threshold": 0, 00:20:54.365 "tls_version": 0, 00:20:54.365 "enable_ktls": false 00:20:54.365 } 00:20:54.365 }, 00:20:54.365 { 00:20:54.365 "method": "sock_impl_set_options", 00:20:54.365 "params": { 00:20:54.365 "impl_name": "posix", 00:20:54.365 "recv_buf_size": 2097152, 00:20:54.365 "send_buf_size": 2097152, 00:20:54.365 "enable_recv_pipe": true, 00:20:54.365 "enable_quickack": false, 00:20:54.365 "enable_placement_id": 0, 00:20:54.365 "enable_zerocopy_send_server": true, 00:20:54.365 "enable_zerocopy_send_client": false, 00:20:54.365 "zerocopy_threshold": 0, 00:20:54.365 "tls_version": 0, 00:20:54.365 "enable_ktls": false 00:20:54.365 } 00:20:54.365 } 00:20:54.365 ] 00:20:54.365 }, 00:20:54.365 { 00:20:54.365 "subsystem": "vmd", 00:20:54.365 "config": [] 00:20:54.365 }, 00:20:54.365 { 00:20:54.365 "subsystem": "accel", 00:20:54.365 "config": [ 00:20:54.365 { 00:20:54.365 "method": "accel_set_options", 00:20:54.365 "params": { 00:20:54.365 "small_cache_size": 128, 00:20:54.365 "large_cache_size": 16, 00:20:54.365 "task_count": 2048, 00:20:54.365 "sequence_count": 2048, 00:20:54.365 "buf_count": 2048 00:20:54.365 } 00:20:54.365 } 00:20:54.365 ] 00:20:54.365 }, 00:20:54.365 { 00:20:54.365 "subsystem": "bdev", 00:20:54.365 "config": [ 00:20:54.365 { 00:20:54.365 "method": "bdev_set_options", 00:20:54.365 "params": { 00:20:54.365 "bdev_io_pool_size": 65535, 00:20:54.365 "bdev_io_cache_size": 256, 00:20:54.365 "bdev_auto_examine": true, 00:20:54.365 "iobuf_small_cache_size": 128, 00:20:54.365 "iobuf_large_cache_size": 16 00:20:54.365 } 00:20:54.365 }, 00:20:54.365 { 00:20:54.365 "method": "bdev_raid_set_options", 00:20:54.365 "params": { 00:20:54.365 "process_window_size_kb": 1024 00:20:54.365 } 00:20:54.365 }, 00:20:54.365 { 00:20:54.365 "method": "bdev_iscsi_set_options", 00:20:54.365 "params": { 00:20:54.365 "timeout_sec": 30 00:20:54.365 } 00:20:54.365 }, 00:20:54.365 { 00:20:54.365 "method": 
"bdev_nvme_set_options", 00:20:54.365 "params": { 00:20:54.365 "action_on_timeout": "none", 00:20:54.365 "timeout_us": 0, 00:20:54.365 "timeout_admin_us": 0, 00:20:54.365 "keep_alive_timeout_ms": 10000, 00:20:54.365 "arbitration_burst": 0, 00:20:54.365 "low_priority_weight": 0, 00:20:54.365 "medium_priority_weight": 0, 00:20:54.365 "high_priority_weight": 0, 00:20:54.365 "nvme_adminq_poll_period_us": 10000, 00:20:54.365 "nvme_ioq_poll_period_us": 0, 00:20:54.365 "io_queue_requests": 512, 00:20:54.365 "delay_cmd_submit": true, 00:20:54.365 "transport_retry_count": 4, 00:20:54.366 "bdev_retry_count": 3, 00:20:54.366 "transport_ack_timeout": 0, 00:20:54.366 "ctrlr_loss_timeout_sec": 0, 00:20:54.366 "reconnect_delay_sec": 0, 00:20:54.366 "fast_io_fail_timeout_sec": 0, 00:20:54.366 "disable_auto_failback": false, 00:20:54.366 "generate_uuids": false, 00:20:54.366 "transport_tos": 0, 00:20:54.366 "nvme_error_stat": false, 00:20:54.366 "rdma_srq_size": 0, 00:20:54.366 "io_path_stat": false, 00:20:54.366 "allow_accel_sequence": false, 00:20:54.366 "rdma_max_cq_size": 0, 00:20:54.366 "rdma_cm_event_timeout_ms": 0, 00:20:54.366 "dhchap_digests": [ 00:20:54.366 "sha256", 00:20:54.366 "sha384", 00:20:54.366 "sha512" 00:20:54.366 ], 00:20:54.366 "dhchap_dhgroups": [ 00:20:54.366 "null", 00:20:54.366 "ffdhe2048", 00:20:54.366 "ffdhe3072", 00:20:54.366 "ffdhe4096", 00:20:54.366 "ffdhe6144", 00:20:54.366 "ffdhe8192" 00:20:54.366 ] 00:20:54.366 } 00:20:54.366 }, 00:20:54.366 { 00:20:54.366 "method": "bdev_nvme_attach_controller", 00:20:54.366 "params": { 00:20:54.366 "name": "TLSTEST", 00:20:54.366 "trtype": "TCP", 00:20:54.366 "adrfam": "IPv4", 00:20:54.366 "traddr": "10.0.0.2", 00:20:54.366 "trsvcid": "4420", 00:20:54.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.366 "prchk_reftag": false, 00:20:54.366 "prchk_guard": false, 00:20:54.366 "ctrlr_loss_timeout_sec": 0, 00:20:54.366 "reconnect_delay_sec": 0, 00:20:54.366 "fast_io_fail_timeout_sec": 0, 00:20:54.366 "psk": "/tmp/tmp.3WcvWImN9P", 00:20:54.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.366 "hdgst": false, 00:20:54.366 "ddgst": false 00:20:54.366 } 00:20:54.366 }, 00:20:54.366 { 00:20:54.366 "method": "bdev_nvme_set_hotplug", 00:20:54.366 "params": { 00:20:54.366 "period_us": 100000, 00:20:54.366 "enable": false 00:20:54.366 } 00:20:54.366 }, 00:20:54.366 { 00:20:54.366 "method": "bdev_wait_for_examine" 00:20:54.366 } 00:20:54.366 ] 00:20:54.366 }, 00:20:54.366 { 00:20:54.366 "subsystem": "nbd", 00:20:54.366 "config": [] 00:20:54.366 } 00:20:54.366 ] 00:20:54.366 }' 00:20:54.366 [2024-07-12 10:59:11.158379] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:54.366 [2024-07-12 10:59:11.158429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135432 ] 00:20:54.366 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.366 [2024-07-12 10:59:11.234666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.366 [2024-07-12 10:59:11.297731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.628 [2024-07-12 10:59:11.427485] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.628 [2024-07-12 10:59:11.427557] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:55.199 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.199 10:59:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:55.199 10:59:11 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:55.199 Running I/O for 10 seconds... 00:21:05.191 00:21:05.191 Latency(us) 00:21:05.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.191 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.191 Verification LBA range: start 0x0 length 0x2000 00:21:05.191 TLSTESTn1 : 10.06 4530.44 17.70 0.00 0.00 28160.07 7427.41 58545.49 00:21:05.191 =================================================================================================================== 00:21:05.191 Total : 4530.44 17.70 0.00 0.00 28160.07 7427.41 58545.49 00:21:05.191 0 00:21:05.191 10:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.191 10:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2135432 00:21:05.191 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2135432 ']' 00:21:05.191 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2135432 00:21:05.191 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:05.191 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.191 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2135432 00:21:05.451 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:05.451 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:05.451 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2135432' 00:21:05.451 killing process with pid 2135432 00:21:05.451 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2135432 00:21:05.451 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.451 00:21:05.451 Latency(us) 00:21:05.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.451 =================================================================================================================== 00:21:05.451 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.451 [2024-07-12 10:59:22.178765] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:05.451 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2135432 00:21:05.451 10:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2135379 00:21:05.451 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2135379 ']' 00:21:05.452 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2135379 00:21:05.452 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:05.452 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.452 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2135379 00:21:05.452 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:05.452 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:05.452 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2135379' 00:21:05.452 killing process with pid 2135379 00:21:05.452 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2135379 00:21:05.452 [2024-07-12 10:59:22.347651] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:05.452 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2135379 00:21:05.713 10:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:05.713 10:59:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.713 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.714 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.714 10:59:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2137750 00:21:05.714 10:59:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2137750 00:21:05.714 10:59:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:05.714 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2137750 ']' 00:21:05.714 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.714 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.714 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.714 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.714 10:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.714 [2024-07-12 10:59:22.522251] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
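The MiB/s column in the TLSTESTn1 table further up is simply IOPS times the 4096-byte I/O size configured for the run; a quick sanity check of the 10-second pass:

  $ echo '4530.44 * 4096 / 1048576' | bc -l
  # ~17.70, matching the reported 17.70 MiB/s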
00:21:05.714 [2024-07-12 10:59:22.522308] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.714 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.714 [2024-07-12 10:59:22.602948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.714 [2024-07-12 10:59:22.693173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.714 [2024-07-12 10:59:22.693233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.714 [2024-07-12 10:59:22.693241] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.714 [2024-07-12 10:59:22.693247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.714 [2024-07-12 10:59:22.693253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.714 [2024-07-12 10:59:22.693280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.708 10:59:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.709 10:59:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:06.709 10:59:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.709 10:59:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.709 10:59:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.709 10:59:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.709 10:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.3WcvWImN9P 00:21:06.709 10:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3WcvWImN9P 00:21:06.709 10:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:06.709 [2024-07-12 10:59:23.519727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.709 10:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:06.969 10:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:06.969 [2024-07-12 10:59:23.856575] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.969 [2024-07-12 10:59:23.856895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.969 10:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:07.230 malloc0 00:21:07.230 10:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:07.230 10:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.3WcvWImN9P 00:21:07.491 [2024-07-12 10:59:24.328005] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:07.491 10:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:07.491 10:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2138116 00:21:07.491 10:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.491 10:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2138116 /var/tmp/bdevperf.sock 00:21:07.491 10:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2138116 ']' 00:21:07.491 10:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.491 10:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.491 10:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.491 10:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.491 10:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.491 [2024-07-12 10:59:24.380507] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:07.491 [2024-07-12 10:59:24.380571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138116 ] 00:21:07.491 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.491 [2024-07-12 10:59:24.460562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.752 [2024-07-12 10:59:24.521524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.324 10:59:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.324 10:59:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:08.324 10:59:25 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3WcvWImN9P 00:21:08.584 10:59:25 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:08.584 [2024-07-12 10:59:25.460863] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.584 nvme0n1 00:21:08.585 10:59:25 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:08.845 Running I/O for 1 seconds... 
00:21:09.788 00:21:09.788 Latency(us) 00:21:09.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.788 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:09.788 Verification LBA range: start 0x0 length 0x2000 00:21:09.788 nvme0n1 : 1.05 3195.49 12.48 0.00 0.00 39197.18 4587.52 48278.19 00:21:09.788 =================================================================================================================== 00:21:09.788 Total : 3195.49 12.48 0.00 0.00 39197.18 4587.52 48278.19 00:21:09.788 0 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2138116 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2138116 ']' 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2138116 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2138116 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2138116' 00:21:09.788 killing process with pid 2138116 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2138116 00:21:09.788 Received shutdown signal, test time was about 1.000000 seconds 00:21:09.788 00:21:09.788 Latency(us) 00:21:09.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.788 =================================================================================================================== 00:21:09.788 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.788 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2138116 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2137750 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2137750 ']' 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2137750 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2137750 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2137750' 00:21:10.050 killing process with pid 2137750 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2137750 00:21:10.050 [2024-07-12 10:59:26.933229] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:10.050 10:59:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2137750 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.313 
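Stripped of the xtrace noise, the setup_nvmf_tgt sequence exercised by the run above (target/tls.sh@51 through @58) reduces to six RPCs; a sketch against the default target socket, reusing the PSK file path from the trace. The -k on the listener requests a secure channel (TLS), and handing nvmf_subsystem_add_host a raw file via --psk is the form the log flags as deprecated for removal in v24.09:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3WcvWImN9P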
10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2138620 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2138620 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2138620 ']' 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.313 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.313 [2024-07-12 10:59:27.136037] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:10.313 [2024-07-12 10:59:27.136097] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.313 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.313 [2024-07-12 10:59:27.219805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.575 [2024-07-12 10:59:27.311689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.575 [2024-07-12 10:59:27.311748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.575 [2024-07-12 10:59:27.311756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.575 [2024-07-12 10:59:27.311763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.575 [2024-07-12 10:59:27.311769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:10.575 [2024-07-12 10:59:27.311794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.151 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.151 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:11.151 10:59:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.151 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:11.151 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.151 10:59:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.151 10:59:27 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:11.151 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.151 10:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.151 [2024-07-12 10:59:27.971193] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.151 malloc0 00:21:11.151 [2024-07-12 10:59:28.001390] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.151 [2024-07-12 10:59:28.001703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.151 10:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.151 10:59:28 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2138821 00:21:11.151 10:59:28 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2138821 /var/tmp/bdevperf.sock 00:21:11.151 10:59:28 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:11.151 10:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2138821 ']' 00:21:11.151 10:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.151 10:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.151 10:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.151 10:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.151 10:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.151 [2024-07-12 10:59:28.081882] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
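On the initiator side, note how the bdevperf instance starting above loads the PSK: rather than passing bdev_nvme_attach_controller a raw key file (the spelling that tripped the spdk_nvme_ctrlr_opts.psk deprecation warning in the @204 run earlier), the file is first registered in the keyring and then referenced by name via --psk key0. The two RPCs, as they appear in the trace that follows:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3WcvWImN9P
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1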
00:21:11.151 [2024-07-12 10:59:28.081942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138821 ] 00:21:11.151 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.412 [2024-07-12 10:59:28.161492] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.412 [2024-07-12 10:59:28.221829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.983 10:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.983 10:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:11.983 10:59:28 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3WcvWImN9P 00:21:12.245 10:59:28 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:12.245 [2024-07-12 10:59:29.117076] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.245 nvme0n1 00:21:12.245 10:59:29 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:12.506 Running I/O for 1 seconds... 00:21:13.448 00:21:13.448 Latency(us) 00:21:13.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.448 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:13.448 Verification LBA range: start 0x0 length 0x2000 00:21:13.448 nvme0n1 : 1.02 4571.50 17.86 0.00 0.00 27715.00 4478.29 53957.97 00:21:13.448 =================================================================================================================== 00:21:13.448 Total : 4571.50 17.86 0.00 0.00 27715.00 4478.29 53957.97 00:21:13.448 0 00:21:13.448 10:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:13.448 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.448 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.709 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.709 10:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:13.709 "subsystems": [ 00:21:13.709 { 00:21:13.709 "subsystem": "keyring", 00:21:13.709 "config": [ 00:21:13.709 { 00:21:13.709 "method": "keyring_file_add_key", 00:21:13.709 "params": { 00:21:13.709 "name": "key0", 00:21:13.709 "path": "/tmp/tmp.3WcvWImN9P" 00:21:13.709 } 00:21:13.709 } 00:21:13.709 ] 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "subsystem": "iobuf", 00:21:13.709 "config": [ 00:21:13.709 { 00:21:13.709 "method": "iobuf_set_options", 00:21:13.709 "params": { 00:21:13.709 "small_pool_count": 8192, 00:21:13.709 "large_pool_count": 1024, 00:21:13.709 "small_bufsize": 8192, 00:21:13.709 "large_bufsize": 135168 00:21:13.709 } 00:21:13.709 } 00:21:13.709 ] 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "subsystem": "sock", 00:21:13.709 "config": [ 00:21:13.709 { 00:21:13.709 "method": "sock_set_default_impl", 00:21:13.709 "params": { 00:21:13.709 "impl_name": "posix" 00:21:13.709 } 
00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "method": "sock_impl_set_options", 00:21:13.709 "params": { 00:21:13.709 "impl_name": "ssl", 00:21:13.709 "recv_buf_size": 4096, 00:21:13.709 "send_buf_size": 4096, 00:21:13.709 "enable_recv_pipe": true, 00:21:13.709 "enable_quickack": false, 00:21:13.709 "enable_placement_id": 0, 00:21:13.709 "enable_zerocopy_send_server": true, 00:21:13.709 "enable_zerocopy_send_client": false, 00:21:13.709 "zerocopy_threshold": 0, 00:21:13.709 "tls_version": 0, 00:21:13.709 "enable_ktls": false 00:21:13.709 } 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "method": "sock_impl_set_options", 00:21:13.709 "params": { 00:21:13.709 "impl_name": "posix", 00:21:13.709 "recv_buf_size": 2097152, 00:21:13.709 "send_buf_size": 2097152, 00:21:13.709 "enable_recv_pipe": true, 00:21:13.709 "enable_quickack": false, 00:21:13.709 "enable_placement_id": 0, 00:21:13.709 "enable_zerocopy_send_server": true, 00:21:13.709 "enable_zerocopy_send_client": false, 00:21:13.709 "zerocopy_threshold": 0, 00:21:13.709 "tls_version": 0, 00:21:13.709 "enable_ktls": false 00:21:13.709 } 00:21:13.709 } 00:21:13.709 ] 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "subsystem": "vmd", 00:21:13.709 "config": [] 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "subsystem": "accel", 00:21:13.709 "config": [ 00:21:13.709 { 00:21:13.709 "method": "accel_set_options", 00:21:13.709 "params": { 00:21:13.709 "small_cache_size": 128, 00:21:13.709 "large_cache_size": 16, 00:21:13.709 "task_count": 2048, 00:21:13.709 "sequence_count": 2048, 00:21:13.709 "buf_count": 2048 00:21:13.709 } 00:21:13.709 } 00:21:13.709 ] 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "subsystem": "bdev", 00:21:13.709 "config": [ 00:21:13.709 { 00:21:13.709 "method": "bdev_set_options", 00:21:13.709 "params": { 00:21:13.709 "bdev_io_pool_size": 65535, 00:21:13.709 "bdev_io_cache_size": 256, 00:21:13.709 "bdev_auto_examine": true, 00:21:13.709 "iobuf_small_cache_size": 128, 00:21:13.709 "iobuf_large_cache_size": 16 00:21:13.709 } 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "method": "bdev_raid_set_options", 00:21:13.709 "params": { 00:21:13.709 "process_window_size_kb": 1024 00:21:13.709 } 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "method": "bdev_iscsi_set_options", 00:21:13.709 "params": { 00:21:13.709 "timeout_sec": 30 00:21:13.709 } 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "method": "bdev_nvme_set_options", 00:21:13.709 "params": { 00:21:13.709 "action_on_timeout": "none", 00:21:13.709 "timeout_us": 0, 00:21:13.709 "timeout_admin_us": 0, 00:21:13.709 "keep_alive_timeout_ms": 10000, 00:21:13.709 "arbitration_burst": 0, 00:21:13.709 "low_priority_weight": 0, 00:21:13.709 "medium_priority_weight": 0, 00:21:13.709 "high_priority_weight": 0, 00:21:13.709 "nvme_adminq_poll_period_us": 10000, 00:21:13.709 "nvme_ioq_poll_period_us": 0, 00:21:13.709 "io_queue_requests": 0, 00:21:13.709 "delay_cmd_submit": true, 00:21:13.709 "transport_retry_count": 4, 00:21:13.709 "bdev_retry_count": 3, 00:21:13.709 "transport_ack_timeout": 0, 00:21:13.709 "ctrlr_loss_timeout_sec": 0, 00:21:13.709 "reconnect_delay_sec": 0, 00:21:13.709 "fast_io_fail_timeout_sec": 0, 00:21:13.709 "disable_auto_failback": false, 00:21:13.709 "generate_uuids": false, 00:21:13.709 "transport_tos": 0, 00:21:13.709 "nvme_error_stat": false, 00:21:13.709 "rdma_srq_size": 0, 00:21:13.709 "io_path_stat": false, 00:21:13.709 "allow_accel_sequence": false, 00:21:13.709 "rdma_max_cq_size": 0, 00:21:13.709 "rdma_cm_event_timeout_ms": 0, 00:21:13.709 "dhchap_digests": [ 00:21:13.709 "sha256", 
00:21:13.709 "sha384", 00:21:13.709 "sha512" 00:21:13.709 ], 00:21:13.709 "dhchap_dhgroups": [ 00:21:13.709 "null", 00:21:13.709 "ffdhe2048", 00:21:13.709 "ffdhe3072", 00:21:13.709 "ffdhe4096", 00:21:13.709 "ffdhe6144", 00:21:13.709 "ffdhe8192" 00:21:13.709 ] 00:21:13.709 } 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "method": "bdev_nvme_set_hotplug", 00:21:13.709 "params": { 00:21:13.709 "period_us": 100000, 00:21:13.709 "enable": false 00:21:13.709 } 00:21:13.709 }, 00:21:13.709 { 00:21:13.709 "method": "bdev_malloc_create", 00:21:13.709 "params": { 00:21:13.709 "name": "malloc0", 00:21:13.709 "num_blocks": 8192, 00:21:13.709 "block_size": 4096, 00:21:13.709 "physical_block_size": 4096, 00:21:13.709 "uuid": "8a6e618d-f70c-4836-b870-5014db4a0e91", 00:21:13.710 "optimal_io_boundary": 0 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "bdev_wait_for_examine" 00:21:13.710 } 00:21:13.710 ] 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "subsystem": "nbd", 00:21:13.710 "config": [] 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "subsystem": "scheduler", 00:21:13.710 "config": [ 00:21:13.710 { 00:21:13.710 "method": "framework_set_scheduler", 00:21:13.710 "params": { 00:21:13.710 "name": "static" 00:21:13.710 } 00:21:13.710 } 00:21:13.710 ] 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "subsystem": "nvmf", 00:21:13.710 "config": [ 00:21:13.710 { 00:21:13.710 "method": "nvmf_set_config", 00:21:13.710 "params": { 00:21:13.710 "discovery_filter": "match_any", 00:21:13.710 "admin_cmd_passthru": { 00:21:13.710 "identify_ctrlr": false 00:21:13.710 } 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "nvmf_set_max_subsystems", 00:21:13.710 "params": { 00:21:13.710 "max_subsystems": 1024 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "nvmf_set_crdt", 00:21:13.710 "params": { 00:21:13.710 "crdt1": 0, 00:21:13.710 "crdt2": 0, 00:21:13.710 "crdt3": 0 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "nvmf_create_transport", 00:21:13.710 "params": { 00:21:13.710 "trtype": "TCP", 00:21:13.710 "max_queue_depth": 128, 00:21:13.710 "max_io_qpairs_per_ctrlr": 127, 00:21:13.710 "in_capsule_data_size": 4096, 00:21:13.710 "max_io_size": 131072, 00:21:13.710 "io_unit_size": 131072, 00:21:13.710 "max_aq_depth": 128, 00:21:13.710 "num_shared_buffers": 511, 00:21:13.710 "buf_cache_size": 4294967295, 00:21:13.710 "dif_insert_or_strip": false, 00:21:13.710 "zcopy": false, 00:21:13.710 "c2h_success": false, 00:21:13.710 "sock_priority": 0, 00:21:13.710 "abort_timeout_sec": 1, 00:21:13.710 "ack_timeout": 0, 00:21:13.710 "data_wr_pool_size": 0 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "nvmf_create_subsystem", 00:21:13.710 "params": { 00:21:13.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.710 "allow_any_host": false, 00:21:13.710 "serial_number": "00000000000000000000", 00:21:13.710 "model_number": "SPDK bdev Controller", 00:21:13.710 "max_namespaces": 32, 00:21:13.710 "min_cntlid": 1, 00:21:13.710 "max_cntlid": 65519, 00:21:13.710 "ana_reporting": false 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "nvmf_subsystem_add_host", 00:21:13.710 "params": { 00:21:13.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.710 "host": "nqn.2016-06.io.spdk:host1", 00:21:13.710 "psk": "key0" 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "nvmf_subsystem_add_ns", 00:21:13.710 "params": { 00:21:13.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.710 "namespace": { 00:21:13.710 "nsid": 1, 
00:21:13.710 "bdev_name": "malloc0", 00:21:13.710 "nguid": "8A6E618DF70C4836B8705014DB4A0E91", 00:21:13.710 "uuid": "8a6e618d-f70c-4836-b870-5014db4a0e91", 00:21:13.710 "no_auto_visible": false 00:21:13.710 } 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "nvmf_subsystem_add_listener", 00:21:13.710 "params": { 00:21:13.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.710 "listen_address": { 00:21:13.710 "trtype": "TCP", 00:21:13.710 "adrfam": "IPv4", 00:21:13.710 "traddr": "10.0.0.2", 00:21:13.710 "trsvcid": "4420" 00:21:13.710 }, 00:21:13.710 "secure_channel": true 00:21:13.710 } 00:21:13.710 } 00:21:13.710 ] 00:21:13.710 } 00:21:13.710 ] 00:21:13.710 }' 00:21:13.710 10:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:13.710 10:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:13.710 "subsystems": [ 00:21:13.710 { 00:21:13.710 "subsystem": "keyring", 00:21:13.710 "config": [ 00:21:13.710 { 00:21:13.710 "method": "keyring_file_add_key", 00:21:13.710 "params": { 00:21:13.710 "name": "key0", 00:21:13.710 "path": "/tmp/tmp.3WcvWImN9P" 00:21:13.710 } 00:21:13.710 } 00:21:13.710 ] 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "subsystem": "iobuf", 00:21:13.710 "config": [ 00:21:13.710 { 00:21:13.710 "method": "iobuf_set_options", 00:21:13.710 "params": { 00:21:13.710 "small_pool_count": 8192, 00:21:13.710 "large_pool_count": 1024, 00:21:13.710 "small_bufsize": 8192, 00:21:13.710 "large_bufsize": 135168 00:21:13.710 } 00:21:13.710 } 00:21:13.710 ] 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "subsystem": "sock", 00:21:13.710 "config": [ 00:21:13.710 { 00:21:13.710 "method": "sock_set_default_impl", 00:21:13.710 "params": { 00:21:13.710 "impl_name": "posix" 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "sock_impl_set_options", 00:21:13.710 "params": { 00:21:13.710 "impl_name": "ssl", 00:21:13.710 "recv_buf_size": 4096, 00:21:13.710 "send_buf_size": 4096, 00:21:13.710 "enable_recv_pipe": true, 00:21:13.710 "enable_quickack": false, 00:21:13.710 "enable_placement_id": 0, 00:21:13.710 "enable_zerocopy_send_server": true, 00:21:13.710 "enable_zerocopy_send_client": false, 00:21:13.710 "zerocopy_threshold": 0, 00:21:13.710 "tls_version": 0, 00:21:13.710 "enable_ktls": false 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "sock_impl_set_options", 00:21:13.710 "params": { 00:21:13.710 "impl_name": "posix", 00:21:13.710 "recv_buf_size": 2097152, 00:21:13.710 "send_buf_size": 2097152, 00:21:13.710 "enable_recv_pipe": true, 00:21:13.710 "enable_quickack": false, 00:21:13.710 "enable_placement_id": 0, 00:21:13.710 "enable_zerocopy_send_server": true, 00:21:13.710 "enable_zerocopy_send_client": false, 00:21:13.710 "zerocopy_threshold": 0, 00:21:13.710 "tls_version": 0, 00:21:13.710 "enable_ktls": false 00:21:13.710 } 00:21:13.710 } 00:21:13.710 ] 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "subsystem": "vmd", 00:21:13.710 "config": [] 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "subsystem": "accel", 00:21:13.710 "config": [ 00:21:13.710 { 00:21:13.710 "method": "accel_set_options", 00:21:13.710 "params": { 00:21:13.710 "small_cache_size": 128, 00:21:13.710 "large_cache_size": 16, 00:21:13.710 "task_count": 2048, 00:21:13.710 "sequence_count": 2048, 00:21:13.710 "buf_count": 2048 00:21:13.710 } 00:21:13.710 } 00:21:13.710 ] 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "subsystem": "bdev", 00:21:13.710 "config": [ 
00:21:13.710 { 00:21:13.710 "method": "bdev_set_options", 00:21:13.710 "params": { 00:21:13.710 "bdev_io_pool_size": 65535, 00:21:13.710 "bdev_io_cache_size": 256, 00:21:13.710 "bdev_auto_examine": true, 00:21:13.710 "iobuf_small_cache_size": 128, 00:21:13.710 "iobuf_large_cache_size": 16 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "bdev_raid_set_options", 00:21:13.710 "params": { 00:21:13.710 "process_window_size_kb": 1024 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "bdev_iscsi_set_options", 00:21:13.710 "params": { 00:21:13.710 "timeout_sec": 30 00:21:13.710 } 00:21:13.710 }, 00:21:13.710 { 00:21:13.710 "method": "bdev_nvme_set_options", 00:21:13.710 "params": { 00:21:13.710 "action_on_timeout": "none", 00:21:13.710 "timeout_us": 0, 00:21:13.710 "timeout_admin_us": 0, 00:21:13.710 "keep_alive_timeout_ms": 10000, 00:21:13.710 "arbitration_burst": 0, 00:21:13.710 "low_priority_weight": 0, 00:21:13.710 "medium_priority_weight": 0, 00:21:13.710 "high_priority_weight": 0, 00:21:13.710 "nvme_adminq_poll_period_us": 10000, 00:21:13.710 "nvme_ioq_poll_period_us": 0, 00:21:13.710 "io_queue_requests": 512, 00:21:13.710 "delay_cmd_submit": true, 00:21:13.710 "transport_retry_count": 4, 00:21:13.710 "bdev_retry_count": 3, 00:21:13.710 "transport_ack_timeout": 0, 00:21:13.710 "ctrlr_loss_timeout_sec": 0, 00:21:13.710 "reconnect_delay_sec": 0, 00:21:13.710 "fast_io_fail_timeout_sec": 0, 00:21:13.710 "disable_auto_failback": false, 00:21:13.710 "generate_uuids": false, 00:21:13.710 "transport_tos": 0, 00:21:13.710 "nvme_error_stat": false, 00:21:13.710 "rdma_srq_size": 0, 00:21:13.710 "io_path_stat": false, 00:21:13.710 "allow_accel_sequence": false, 00:21:13.710 "rdma_max_cq_size": 0, 00:21:13.711 "rdma_cm_event_timeout_ms": 0, 00:21:13.711 "dhchap_digests": [ 00:21:13.711 "sha256", 00:21:13.711 "sha384", 00:21:13.711 "sha512" 00:21:13.711 ], 00:21:13.711 "dhchap_dhgroups": [ 00:21:13.711 "null", 00:21:13.711 "ffdhe2048", 00:21:13.711 "ffdhe3072", 00:21:13.711 "ffdhe4096", 00:21:13.711 "ffdhe6144", 00:21:13.711 "ffdhe8192" 00:21:13.711 ] 00:21:13.711 } 00:21:13.711 }, 00:21:13.711 { 00:21:13.711 "method": "bdev_nvme_attach_controller", 00:21:13.711 "params": { 00:21:13.711 "name": "nvme0", 00:21:13.711 "trtype": "TCP", 00:21:13.711 "adrfam": "IPv4", 00:21:13.711 "traddr": "10.0.0.2", 00:21:13.711 "trsvcid": "4420", 00:21:13.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.711 "prchk_reftag": false, 00:21:13.711 "prchk_guard": false, 00:21:13.711 "ctrlr_loss_timeout_sec": 0, 00:21:13.711 "reconnect_delay_sec": 0, 00:21:13.711 "fast_io_fail_timeout_sec": 0, 00:21:13.711 "psk": "key0", 00:21:13.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.711 "hdgst": false, 00:21:13.711 "ddgst": false 00:21:13.711 } 00:21:13.711 }, 00:21:13.711 { 00:21:13.711 "method": "bdev_nvme_set_hotplug", 00:21:13.711 "params": { 00:21:13.711 "period_us": 100000, 00:21:13.711 "enable": false 00:21:13.711 } 00:21:13.711 }, 00:21:13.711 { 00:21:13.711 "method": "bdev_enable_histogram", 00:21:13.711 "params": { 00:21:13.711 "name": "nvme0n1", 00:21:13.711 "enable": true 00:21:13.711 } 00:21:13.711 }, 00:21:13.711 { 00:21:13.711 "method": "bdev_wait_for_examine" 00:21:13.711 } 00:21:13.711 ] 00:21:13.711 }, 00:21:13.711 { 00:21:13.711 "subsystem": "nbd", 00:21:13.711 "config": [] 00:21:13.711 } 00:21:13.711 ] 00:21:13.711 }' 00:21:13.711 10:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2138821 00:21:13.711 10:59:30 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 2138821 ']' 00:21:13.711 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2138821 00:21:13.711 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:13.711 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2138821 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2138821' 00:21:13.972 killing process with pid 2138821 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2138821 00:21:13.972 Received shutdown signal, test time was about 1.000000 seconds 00:21:13.972 00:21:13.972 Latency(us) 00:21:13.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.972 =================================================================================================================== 00:21:13.972 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2138821 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2138620 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2138620 ']' 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2138620 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2138620 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2138620' 00:21:13.972 killing process with pid 2138620 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2138620 00:21:13.972 10:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2138620 00:21:14.234 10:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:14.234 10:59:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:14.234 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.234 10:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:14.234 "subsystems": [ 00:21:14.234 { 00:21:14.234 "subsystem": "keyring", 00:21:14.234 "config": [ 00:21:14.234 { 00:21:14.234 "method": "keyring_file_add_key", 00:21:14.234 "params": { 00:21:14.234 "name": "key0", 00:21:14.234 "path": "/tmp/tmp.3WcvWImN9P" 00:21:14.234 } 00:21:14.234 } 00:21:14.234 ] 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "subsystem": "iobuf", 00:21:14.234 "config": [ 00:21:14.234 { 00:21:14.234 "method": "iobuf_set_options", 00:21:14.234 "params": { 00:21:14.234 "small_pool_count": 8192, 00:21:14.234 "large_pool_count": 1024, 00:21:14.234 "small_bufsize": 8192, 00:21:14.234 "large_bufsize": 135168 00:21:14.234 } 00:21:14.234 } 00:21:14.234 ] 00:21:14.234 }, 
00:21:14.234 { 00:21:14.234 "subsystem": "sock", 00:21:14.234 "config": [ 00:21:14.234 { 00:21:14.234 "method": "sock_set_default_impl", 00:21:14.234 "params": { 00:21:14.234 "impl_name": "posix" 00:21:14.234 } 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "method": "sock_impl_set_options", 00:21:14.234 "params": { 00:21:14.234 "impl_name": "ssl", 00:21:14.234 "recv_buf_size": 4096, 00:21:14.234 "send_buf_size": 4096, 00:21:14.234 "enable_recv_pipe": true, 00:21:14.234 "enable_quickack": false, 00:21:14.234 "enable_placement_id": 0, 00:21:14.234 "enable_zerocopy_send_server": true, 00:21:14.234 "enable_zerocopy_send_client": false, 00:21:14.234 "zerocopy_threshold": 0, 00:21:14.234 "tls_version": 0, 00:21:14.234 "enable_ktls": false 00:21:14.234 } 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "method": "sock_impl_set_options", 00:21:14.234 "params": { 00:21:14.234 "impl_name": "posix", 00:21:14.234 "recv_buf_size": 2097152, 00:21:14.234 "send_buf_size": 2097152, 00:21:14.234 "enable_recv_pipe": true, 00:21:14.234 "enable_quickack": false, 00:21:14.234 "enable_placement_id": 0, 00:21:14.234 "enable_zerocopy_send_server": true, 00:21:14.234 "enable_zerocopy_send_client": false, 00:21:14.234 "zerocopy_threshold": 0, 00:21:14.234 "tls_version": 0, 00:21:14.234 "enable_ktls": false 00:21:14.234 } 00:21:14.234 } 00:21:14.234 ] 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "subsystem": "vmd", 00:21:14.234 "config": [] 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "subsystem": "accel", 00:21:14.234 "config": [ 00:21:14.234 { 00:21:14.234 "method": "accel_set_options", 00:21:14.234 "params": { 00:21:14.234 "small_cache_size": 128, 00:21:14.234 "large_cache_size": 16, 00:21:14.234 "task_count": 2048, 00:21:14.234 "sequence_count": 2048, 00:21:14.234 "buf_count": 2048 00:21:14.234 } 00:21:14.234 } 00:21:14.234 ] 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "subsystem": "bdev", 00:21:14.234 "config": [ 00:21:14.234 { 00:21:14.234 "method": "bdev_set_options", 00:21:14.234 "params": { 00:21:14.234 "bdev_io_pool_size": 65535, 00:21:14.234 "bdev_io_cache_size": 256, 00:21:14.234 "bdev_auto_examine": true, 00:21:14.234 "iobuf_small_cache_size": 128, 00:21:14.234 "iobuf_large_cache_size": 16 00:21:14.234 } 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "method": "bdev_raid_set_options", 00:21:14.234 "params": { 00:21:14.234 "process_window_size_kb": 1024 00:21:14.234 } 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "method": "bdev_iscsi_set_options", 00:21:14.234 "params": { 00:21:14.234 "timeout_sec": 30 00:21:14.234 } 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "method": "bdev_nvme_set_options", 00:21:14.234 "params": { 00:21:14.234 "action_on_timeout": "none", 00:21:14.234 "timeout_us": 0, 00:21:14.234 "timeout_admin_us": 0, 00:21:14.234 "keep_alive_timeout_ms": 10000, 00:21:14.234 "arbitration_burst": 0, 00:21:14.234 "low_priority_weight": 0, 00:21:14.234 "medium_priority_weight": 0, 00:21:14.234 "high_priority_weight": 0, 00:21:14.234 "nvme_adminq_poll_period_us": 10000, 00:21:14.234 "nvme_ioq_poll_period_us": 0, 00:21:14.234 "io_queue_requests": 0, 00:21:14.234 "delay_cmd_submit": true, 00:21:14.234 "transport_retry_count": 4, 00:21:14.234 "bdev_retry_count": 3, 00:21:14.234 "transport_ack_timeout": 0, 00:21:14.234 "ctrlr_loss_timeout_sec": 0, 00:21:14.234 "reconnect_delay_sec": 0, 00:21:14.234 "fast_io_fail_timeout_sec": 0, 00:21:14.234 "disable_auto_failback": false, 00:21:14.234 "generate_uuids": false, 00:21:14.234 "transport_tos": 0, 00:21:14.234 "nvme_error_stat": false, 00:21:14.234 "rdma_srq_size": 0, 
00:21:14.234 "io_path_stat": false, 00:21:14.234 "allow_accel_sequence": false, 00:21:14.234 "rdma_max_cq_size": 0, 00:21:14.234 "rdma_cm_event_timeout_ms": 0, 00:21:14.234 "dhchap_digests": [ 00:21:14.234 "sha256", 00:21:14.234 "sha384", 00:21:14.234 "sha512" 00:21:14.234 ], 00:21:14.234 "dhchap_dhgroups": [ 00:21:14.234 "null", 00:21:14.234 "ffdhe2048", 00:21:14.234 "ffdhe3072", 00:21:14.234 "ffdhe4096", 00:21:14.234 "ffdhe6144", 00:21:14.234 "ffdhe8192" 00:21:14.234 ] 00:21:14.234 } 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "method": "bdev_nvme_set_hotplug", 00:21:14.234 "params": { 00:21:14.234 "period_us": 100000, 00:21:14.234 "enable": false 00:21:14.234 } 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "method": "bdev_malloc_create", 00:21:14.234 "params": { 00:21:14.234 "name": "malloc0", 00:21:14.234 "num_blocks": 8192, 00:21:14.234 "block_size": 4096, 00:21:14.234 "physical_block_size": 4096, 00:21:14.234 "uuid": "8a6e618d-f70c-4836-b870-5014db4a0e91", 00:21:14.234 "optimal_io_boundary": 0 00:21:14.234 } 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "method": "bdev_wait_for_examine" 00:21:14.234 } 00:21:14.234 ] 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "subsystem": "nbd", 00:21:14.234 "config": [] 00:21:14.234 }, 00:21:14.234 { 00:21:14.234 "subsystem": "scheduler", 00:21:14.234 "config": [ 00:21:14.234 { 00:21:14.234 "method": "framework_set_scheduler", 00:21:14.234 "params": { 00:21:14.234 "name": "static" 00:21:14.235 } 00:21:14.235 } 00:21:14.235 ] 00:21:14.235 }, 00:21:14.235 { 00:21:14.235 "subsystem": "nvmf", 00:21:14.235 "config": [ 00:21:14.235 { 00:21:14.235 "method": "nvmf_set_config", 00:21:14.235 "params": { 00:21:14.235 "discovery_filter": "match_any", 00:21:14.235 "admin_cmd_passthru": { 00:21:14.235 "identify_ctrlr": false 00:21:14.235 } 00:21:14.235 } 00:21:14.235 }, 00:21:14.235 { 00:21:14.235 "method": "nvmf_set_max_subsystems", 00:21:14.235 "params": { 00:21:14.235 "max_subsystems": 1024 00:21:14.235 } 00:21:14.235 }, 00:21:14.235 { 00:21:14.235 "method": "nvmf_set_crdt", 00:21:14.235 "params": { 00:21:14.235 "crdt1": 0, 00:21:14.235 "crdt2": 0, 00:21:14.235 "crdt3": 0 00:21:14.235 } 00:21:14.235 }, 00:21:14.235 { 00:21:14.235 "method": "nvmf_create_transport", 00:21:14.235 "params": { 00:21:14.235 "trtype": "TCP", 00:21:14.235 "max_queue_depth": 128, 00:21:14.235 "max_io_qpairs_per_ctrlr": 127, 00:21:14.235 "in_capsule_data_size": 4096, 00:21:14.235 "max_io_size": 131072, 00:21:14.235 "io_unit_size": 131072, 00:21:14.235 "max_aq_depth": 128, 00:21:14.235 "num_shared_buffers": 511, 00:21:14.235 "buf_cache_size": 4294967295, 00:21:14.235 "dif_insert_or_strip": false, 00:21:14.235 "zcopy": false, 00:21:14.235 "c2h_success": false, 00:21:14.235 "sock_priority": 0, 00:21:14.235 "abort_timeout_sec": 1, 00:21:14.235 "ack_timeout": 0, 00:21:14.235 "data_wr_pool_size": 0 00:21:14.235 } 00:21:14.235 }, 00:21:14.235 { 00:21:14.235 "method": "nvmf_create_subsystem", 00:21:14.235 "params": { 00:21:14.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.235 "allow_any_host": false, 00:21:14.235 "serial_number": "00000000000000000000", 00:21:14.235 "model_number": "SPDK bdev Controller", 00:21:14.235 "max_namespaces": 32, 00:21:14.235 "min_cntlid": 1, 00:21:14.235 "max_cntlid": 65519, 00:21:14.235 "ana_reporting": false 00:21:14.235 } 00:21:14.235 }, 00:21:14.235 { 00:21:14.235 "method": "nvmf_subsystem_add_host", 00:21:14.235 "params": { 00:21:14.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.235 "host": "nqn.2016-06.io.spdk:host1", 00:21:14.235 "psk": "key0" 00:21:14.235 } 
00:21:14.235 }, 00:21:14.235 { 00:21:14.235 "method": "nvmf_subsystem_add_ns", 00:21:14.235 "params": { 00:21:14.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.235 "namespace": { 00:21:14.235 "nsid": 1, 00:21:14.235 "bdev_name": "malloc0", 00:21:14.235 "nguid": "8A6E618DF70C4836B8705014DB4A0E91", 00:21:14.235 "uuid": "8a6e618d-f70c-4836-b870-5014db4a0e91", 00:21:14.235 "no_auto_visible": false 00:21:14.235 } 00:21:14.235 } 00:21:14.235 }, 00:21:14.235 { 00:21:14.235 "method": "nvmf_subsystem_add_listener", 00:21:14.235 "params": { 00:21:14.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.235 "listen_address": { 00:21:14.235 "trtype": "TCP", 00:21:14.235 "adrfam": "IPv4", 00:21:14.235 "traddr": "10.0.0.2", 00:21:14.235 "trsvcid": "4420" 00:21:14.235 }, 00:21:14.235 "secure_channel": true 00:21:14.235 } 00:21:14.235 } 00:21:14.235 ] 00:21:14.235 } 00:21:14.235 ] 00:21:14.235 }' 00:21:14.235 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.235 10:59:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2139504 00:21:14.235 10:59:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2139504 00:21:14.235 10:59:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:14.235 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2139504 ']' 00:21:14.235 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.235 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.235 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.235 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.235 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.235 [2024-07-12 10:59:31.099992] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:14.235 [2024-07-12 10:59:31.100057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.235 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.235 [2024-07-12 10:59:31.179933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.496 [2024-07-12 10:59:31.234052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.496 [2024-07-12 10:59:31.234084] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.496 [2024-07-12 10:59:31.234089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.496 [2024-07-12 10:59:31.234094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.496 [2024-07-12 10:59:31.234098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
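Note on the launch just traced: nvmfappstart never writes the JSON to disk; the config is echoed into a process substitution, so nvmf_tgt opens it as the pseudo-file /dev/fd/62 named on its command line. A minimal sketch of the same pattern (flags and namespace name taken from the command logged above; $config_json is a placeholder for whatever save_config produced, and error handling is omitted):

    # Start the target inside the test namespace, feeding config through /dev/fd.
    # <(...) expands to /dev/fd/NN, which nvmf_tgt opens like a regular -c file.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$config_json") &
    nvmfpid=$!
    # The suite then polls the RPC socket until the app is ready (waitforlisten).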
00:21:14.496 [2024-07-12 10:59:31.234146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.496 [2024-07-12 10:59:31.426062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.496 [2024-07-12 10:59:31.458091] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.496 [2024-07-12 10:59:31.476433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2139534 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2139534 /var/tmp/bdevperf.sock 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2139534 ']' 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
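waitforlisten, traced above for both pids, is essentially a retry loop against the app's RPC socket: the Unix-domain path only answers once the SPDK app finishes init. A rough equivalent of what happens during the max_retries=100 window (a sketch assuming rpc_get_methods as the probe; the real helper in autotest_common.sh also checks that the pid is still alive between attempts):

    # Poll the Unix-domain RPC socket until bdevperf answers a trivial RPC.
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.5
    done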
00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.069 10:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:15.069 "subsystems": [ 00:21:15.069 { 00:21:15.069 "subsystem": "keyring", 00:21:15.069 "config": [ 00:21:15.069 { 00:21:15.069 "method": "keyring_file_add_key", 00:21:15.069 "params": { 00:21:15.069 "name": "key0", 00:21:15.069 "path": "/tmp/tmp.3WcvWImN9P" 00:21:15.069 } 00:21:15.069 } 00:21:15.069 ] 00:21:15.069 }, 00:21:15.069 { 00:21:15.069 "subsystem": "iobuf", 00:21:15.069 "config": [ 00:21:15.069 { 00:21:15.069 "method": "iobuf_set_options", 00:21:15.069 "params": { 00:21:15.069 "small_pool_count": 8192, 00:21:15.069 "large_pool_count": 1024, 00:21:15.069 "small_bufsize": 8192, 00:21:15.069 "large_bufsize": 135168 00:21:15.069 } 00:21:15.069 } 00:21:15.069 ] 00:21:15.069 }, 00:21:15.069 { 00:21:15.069 "subsystem": "sock", 00:21:15.069 "config": [ 00:21:15.069 { 00:21:15.069 "method": "sock_set_default_impl", 00:21:15.069 "params": { 00:21:15.069 "impl_name": "posix" 00:21:15.069 } 00:21:15.069 }, 00:21:15.069 { 00:21:15.069 "method": "sock_impl_set_options", 00:21:15.069 "params": { 00:21:15.069 "impl_name": "ssl", 00:21:15.069 "recv_buf_size": 4096, 00:21:15.069 "send_buf_size": 4096, 00:21:15.069 "enable_recv_pipe": true, 00:21:15.069 "enable_quickack": false, 00:21:15.069 "enable_placement_id": 0, 00:21:15.069 "enable_zerocopy_send_server": true, 00:21:15.069 "enable_zerocopy_send_client": false, 00:21:15.069 "zerocopy_threshold": 0, 00:21:15.069 "tls_version": 0, 00:21:15.069 "enable_ktls": false 00:21:15.069 } 00:21:15.069 }, 00:21:15.069 { 00:21:15.069 "method": "sock_impl_set_options", 00:21:15.069 "params": { 00:21:15.069 "impl_name": "posix", 00:21:15.069 "recv_buf_size": 2097152, 00:21:15.069 "send_buf_size": 2097152, 00:21:15.069 "enable_recv_pipe": true, 00:21:15.069 "enable_quickack": false, 00:21:15.069 "enable_placement_id": 0, 00:21:15.069 "enable_zerocopy_send_server": true, 00:21:15.069 "enable_zerocopy_send_client": false, 00:21:15.069 "zerocopy_threshold": 0, 00:21:15.069 "tls_version": 0, 00:21:15.069 "enable_ktls": false 00:21:15.069 } 00:21:15.069 } 00:21:15.069 ] 00:21:15.069 }, 00:21:15.069 { 00:21:15.069 "subsystem": "vmd", 00:21:15.069 "config": [] 00:21:15.069 }, 00:21:15.069 { 00:21:15.069 "subsystem": "accel", 00:21:15.069 "config": [ 00:21:15.069 { 00:21:15.069 "method": "accel_set_options", 00:21:15.069 "params": { 00:21:15.070 "small_cache_size": 128, 00:21:15.070 "large_cache_size": 16, 00:21:15.070 "task_count": 2048, 00:21:15.070 "sequence_count": 2048, 00:21:15.070 "buf_count": 2048 00:21:15.070 } 00:21:15.070 } 00:21:15.070 ] 00:21:15.070 }, 00:21:15.070 { 00:21:15.070 "subsystem": "bdev", 00:21:15.070 "config": [ 00:21:15.070 { 00:21:15.070 "method": "bdev_set_options", 00:21:15.070 "params": { 00:21:15.070 "bdev_io_pool_size": 65535, 00:21:15.070 "bdev_io_cache_size": 256, 00:21:15.070 "bdev_auto_examine": true, 00:21:15.070 "iobuf_small_cache_size": 128, 00:21:15.070 "iobuf_large_cache_size": 16 00:21:15.070 } 00:21:15.070 }, 00:21:15.070 { 00:21:15.070 "method": "bdev_raid_set_options", 00:21:15.070 "params": { 00:21:15.070 "process_window_size_kb": 1024 00:21:15.070 } 
00:21:15.070 }, 00:21:15.070 { 00:21:15.070 "method": "bdev_iscsi_set_options", 00:21:15.070 "params": { 00:21:15.070 "timeout_sec": 30 00:21:15.070 } 00:21:15.070 }, 00:21:15.070 { 00:21:15.070 "method": "bdev_nvme_set_options", 00:21:15.070 "params": { 00:21:15.070 "action_on_timeout": "none", 00:21:15.070 "timeout_us": 0, 00:21:15.070 "timeout_admin_us": 0, 00:21:15.070 "keep_alive_timeout_ms": 10000, 00:21:15.070 "arbitration_burst": 0, 00:21:15.070 "low_priority_weight": 0, 00:21:15.070 "medium_priority_weight": 0, 00:21:15.070 "high_priority_weight": 0, 00:21:15.070 "nvme_adminq_poll_period_us": 10000, 00:21:15.070 "nvme_ioq_poll_period_us": 0, 00:21:15.070 "io_queue_requests": 512, 00:21:15.070 "delay_cmd_submit": true, 00:21:15.070 "transport_retry_count": 4, 00:21:15.070 "bdev_retry_count": 3, 00:21:15.070 "transport_ack_timeout": 0, 00:21:15.070 "ctrlr_loss_timeout_sec": 0, 00:21:15.070 "reconnect_delay_sec": 0, 00:21:15.070 "fast_io_fail_timeout_sec": 0, 00:21:15.070 "disable_auto_failback": false, 00:21:15.070 "generate_uuids": false, 00:21:15.070 "transport_tos": 0, 00:21:15.070 "nvme_error_stat": false, 00:21:15.070 "rdma_srq_size": 0, 00:21:15.070 "io_path_stat": false, 00:21:15.070 "allow_accel_sequence": false, 00:21:15.070 "rdma_max_cq_size": 0, 00:21:15.070 "rdma_cm_event_timeout_ms": 0, 00:21:15.070 "dhchap_digests": [ 00:21:15.070 "sha256", 00:21:15.070 "sha384", 00:21:15.070 "sha512" 00:21:15.070 ], 00:21:15.070 "dhchap_dhgroups": [ 00:21:15.070 "null", 00:21:15.070 "ffdhe2048", 00:21:15.070 "ffdhe3072", 00:21:15.070 "ffdhe4096", 00:21:15.070 "ffdhe6144", 00:21:15.070 "ffdhe8192" 00:21:15.070 ] 00:21:15.070 } 00:21:15.070 }, 00:21:15.070 { 00:21:15.070 "method": "bdev_nvme_attach_controller", 00:21:15.070 "params": { 00:21:15.070 "name": "nvme0", 00:21:15.070 "trtype": "TCP", 00:21:15.070 "adrfam": "IPv4", 00:21:15.070 "traddr": "10.0.0.2", 00:21:15.070 "trsvcid": "4420", 00:21:15.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.070 "prchk_reftag": false, 00:21:15.070 "prchk_guard": false, 00:21:15.070 "ctrlr_loss_timeout_sec": 0, 00:21:15.070 "reconnect_delay_sec": 0, 00:21:15.070 "fast_io_fail_timeout_sec": 0, 00:21:15.070 "psk": "key0", 00:21:15.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.070 "hdgst": false, 00:21:15.070 "ddgst": false 00:21:15.070 } 00:21:15.070 }, 00:21:15.070 { 00:21:15.070 "method": "bdev_nvme_set_hotplug", 00:21:15.070 "params": { 00:21:15.070 "period_us": 100000, 00:21:15.070 "enable": false 00:21:15.070 } 00:21:15.070 }, 00:21:15.070 { 00:21:15.070 "method": "bdev_enable_histogram", 00:21:15.070 "params": { 00:21:15.070 "name": "nvme0n1", 00:21:15.070 "enable": true 00:21:15.070 } 00:21:15.070 }, 00:21:15.070 { 00:21:15.070 "method": "bdev_wait_for_examine" 00:21:15.070 } 00:21:15.070 ] 00:21:15.070 }, 00:21:15.070 { 00:21:15.070 "subsystem": "nbd", 00:21:15.070 "config": [] 00:21:15.070 } 00:21:15.070 ] 00:21:15.070 }' 00:21:15.070 [2024-07-12 10:59:31.939089] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
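The bdevperf config echoed above differs from the target's in the two places that matter for this test: io_queue_requests is raised to 512, and a bdev_nvme_attach_controller entry carries psk key0, which is what makes this instance the TLS initiator. The same attachment could be issued by hand against a bdevperf started with -z (a sketch; the flag spelling follows rpc.py in this SPDK tree, so treat it as illustrative rather than exact):

    # Register the PSK file, then attach over TCP with TLS (psk selects the key).
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3WcvWImN9P
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0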
00:21:15.070 [2024-07-12 10:59:31.939155] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139534 ] 00:21:15.070 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.070 [2024-07-12 10:59:31.990734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.070 [2024-07-12 10:59:32.043865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.331 [2024-07-12 10:59:32.177360] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.903 10:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.903 10:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:15.903 10:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:15.903 10:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:16.171 10:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.171 10:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.171 Running I/O for 1 seconds... 00:21:17.114 00:21:17.114 Latency(us) 00:21:17.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.114 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:17.114 Verification LBA range: start 0x0 length 0x2000 00:21:17.114 nvme0n1 : 1.02 4690.33 18.32 0.00 0.00 26987.81 7864.32 42598.40 00:21:17.114 =================================================================================================================== 00:21:17.114 Total : 4690.33 18.32 0.00 0.00 26987.81 7864.32 42598.40 00:21:17.114 0 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:17.114 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:17.114 nvmf_trace.0 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2139534 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2139534 ']' 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 2139534 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2139534 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2139534' 00:21:17.375 killing process with pid 2139534 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2139534 00:21:17.375 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.375 00:21:17.375 Latency(us) 00:21:17.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.375 =================================================================================================================== 00:21:17.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2139534 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.375 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.375 rmmod nvme_tcp 00:21:17.375 rmmod nvme_fabrics 00:21:17.375 rmmod nvme_keyring 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2139504 ']' 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2139504 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2139504 ']' 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2139504 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2139504 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2139504' 00:21:17.636 killing process with pid 2139504 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2139504 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2139504 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:17.636 10:59:34 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.636 10:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.183 10:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:20.183 10:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.sJ34bzoBoy /tmp/tmp.MIUccjhF3I /tmp/tmp.3WcvWImN9P 00:21:20.183 00:21:20.183 real 1m24.744s 00:21:20.183 user 2m11.059s 00:21:20.183 sys 0m26.899s 00:21:20.183 10:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:20.183 10:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.183 ************************************ 00:21:20.183 END TEST nvmf_tls 00:21:20.183 ************************************ 00:21:20.183 10:59:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:20.183 10:59:36 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:20.183 10:59:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:20.183 10:59:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.183 10:59:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:20.183 ************************************ 00:21:20.183 START TEST nvmf_fips 00:21:20.183 ************************************ 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:20.183 * Looking for test storage... 
00:21:20.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.183 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.184 10:59:36 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:20.184 10:59:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:20.184 Error setting digest 00:21:20.184 00021D16FB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:20.184 00021D16FB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:20.184 10:59:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:20.184 10:59:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:20.185 10:59:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:28.330 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:28.331 
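For readability, the OpenSSL version gate traced earlier in this test (ge 3.0.9 3.0.0) reduces to a field-by-field numeric compare of dot-separated components. A condensed sketch of the cmp_versions logic from scripts/common.sh (the real helper also splits on '-' and ':' and supports operators other than >=):

    # Return 0 (true) when $1 >= $2, comparing dot-separated numeric fields.
    ge() {
        local IFS=.
        local -a v1=($1) v2=($2)
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
        done
        return 0   # all fields equal, so >= holds
    }
    ge "$(openssl version | awk '{print $2}')" 3.0.0   # 3.0.9 >= 3.0.0, so the FIPS test proceeds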
10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:28.331 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:28.331 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:28.331 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:28.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:28.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:21:28.331 00:21:28.331 --- 10.0.0.2 ping statistics --- 00:21:28.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.331 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:21:28.331 00:21:28.331 --- 10.0.0.1 ping statistics --- 00:21:28.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.331 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2144233 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2144233 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2144233 ']' 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.331 10:59:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.332 10:59:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.332 10:59:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.332 10:59:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.332 [2024-07-12 10:59:44.457588] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:28.332 [2024-07-12 10:59:44.457657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.332 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.332 [2024-07-12 10:59:44.546814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.332 [2024-07-12 10:59:44.640385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.332 [2024-07-12 10:59:44.640442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
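The namespace plumbing traced above deserves a summary: nvmf_tcp_init splits the two E810 ports between the root namespace (initiator side, cvl_0_1, 10.0.0.1) and a private namespace (target side, cvl_0_0, 10.0.0.2), so the NVMe/TCP traffic crosses real hardware on a single host instead of loopback. Reduced to its commands, all taken from the trace (address flushes omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
    ping -c 1 10.0.0.2                                             # sanity check, root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back

Those two pings are what produce the round-trip statistics logged above.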
00:21:28.332 [2024-07-12 10:59:44.640451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.332 [2024-07-12 10:59:44.640457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.332 [2024-07-12 10:59:44.640463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.332 [2024-07-12 10:59:44.640489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.332 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:28.593 [2024-07-12 10:59:45.448662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.593 [2024-07-12 10:59:45.464640] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:28.593 [2024-07-12 10:59:45.464878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.593 [2024-07-12 10:59:45.494703] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:28.593 malloc0 00:21:28.593 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:28.593 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2144583 00:21:28.593 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2144583 /var/tmp/bdevperf.sock 00:21:28.593 10:59:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:28.593 10:59:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2144583 ']' 00:21:28.593 10:59:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.593 10:59:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:21:28.593 10:59:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.593 10:59:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.593 10:59:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.854 [2024-07-12 10:59:45.601601] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:28.854 [2024-07-12 10:59:45.601679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144583 ] 00:21:28.854 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.854 [2024-07-12 10:59:45.683213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.854 [2024-07-12 10:59:45.774617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.426 10:59:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.426 10:59:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:29.427 10:59:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:29.688 [2024-07-12 10:59:46.533068] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.688 [2024-07-12 10:59:46.533187] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:29.688 TLSTESTn1 00:21:29.688 10:59:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:29.948 Running I/O for 10 seconds... 
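Condensed, the initiator-side sequence that produces the run below: write the TLS PSK in interchange format to a key file with 0600 permissions, start bdevperf with its own RPC socket, attach a controller over TLS with that PSK, then kick off the workload. Every command appears in the trace; $SPDK and $KEY are shorthands introduced here for the workspace checkout and the key path:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    KEY=$SPDK/test/nvmf/fips/key.txt
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
    chmod 0600 "$KEY"                                              # fixed test key from fips.sh@136
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The -z flag keeps bdevperf idle until perform_tests arrives over the RPC socket, so the TLS controller can be attached before the ten-second clock starts.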
00:21:39.957
00:21:39.957 Latency(us)
00:21:39.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:39.957 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:39.957 Verification LBA range: start 0x0 length 0x2000
00:21:39.957 TLSTESTn1 : 10.02 6097.73 23.82 0.00 0.00 20954.62 8137.39 67720.53
00:21:39.957 ===================================================================================================================
00:21:39.957 Total : 6097.73 23.82 0.00 0.00 20954.62 8137.39 67720.53
00:21:39.957 0
00:21:39.957 10:59:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
10:59:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
nvmf_trace.0
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
10:59:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2144583
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2144583 ']'
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2144583
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2144583
00:21:40.218 10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2144583'
killing process with pid 2144583
10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2144583
Received shutdown signal, test time was about 10.000000 seconds
00:21:40.218
00:21:40.218 Latency(us)
00:21:40.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:40.218 ===================================================================================================================
00:21:40.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:40.218 [2024-07-12 10:59:56.949124] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:21:40.218 10:59:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2144583
00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
10:59:57 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.218 rmmod nvme_tcp 00:21:40.218 rmmod nvme_fabrics 00:21:40.218 rmmod nvme_keyring 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2144233 ']' 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2144233 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2144233 ']' 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2144233 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2144233 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2144233' 00:21:40.218 killing process with pid 2144233 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2144233 00:21:40.218 [2024-07-12 10:59:57.176643] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:40.218 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2144233 00:21:40.479 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.479 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.479 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.479 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.479 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.479 10:59:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.479 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.479 10:59:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.496 10:59:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.497 10:59:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.497 00:21:42.497 real 0m22.670s 00:21:42.497 user 0m24.256s 00:21:42.497 sys 0m9.208s 00:21:42.497 10:59:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:42.497 10:59:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:42.497 ************************************ 00:21:42.497 END TEST nvmf_fips 
00:21:42.497 ************************************ 00:21:42.497 10:59:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:42.497 10:59:59 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:42.497 10:59:59 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:42.497 10:59:59 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:42.497 10:59:59 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:42.497 10:59:59 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.497 10:59:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:50.641 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:50.641 11:00:06 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:50.641 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:50.641 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:50.641 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:50.641 11:00:06 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:50.641 11:00:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:50.641 11:00:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
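The discovery block above (and its twins earlier in the run) is gather_supported_nvmf_pci_devs from nvmf/common.sh walking sysfs: match the supported vendor/device IDs, then list each matching port's net device and require it to be up. Stripped of the array bookkeeping, the idea is roughly the following sketch, not the function verbatim (0x159b is the E810 ID this rig carries; the up-test mirrors the [[ up == up ]] check in the trace):

    # enumerate Intel E810 ports and the net devices bound to them
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
        for net in "$pci"/net/*; do
            [[ -e $net && $(<"$net/operstate") == up ]] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"   # e.g. cvl_0_0
        done
    done

On this rig that yields the two ports 0000:4b:00.0 and 0000:4b:00.1 with their cvl_0_0 and cvl_0_1 interfaces, exactly as echoed in the trace.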
00:21:50.641 11:00:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:50.641 ************************************ 00:21:50.641 START TEST nvmf_perf_adq 00:21:50.641 ************************************ 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:50.642 * Looking for test storage... 00:21:50.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.642 11:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:57.234 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.234 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:57.234 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:57.234 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:57.235 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:57.235 Found 0000:4b:00.1 (0x8086 - 0x159b) 
00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:57.235 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:57.235 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:57.235 11:00:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:58.177 11:00:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:00.727 11:00:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:06.026 11:00:22 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.026 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:06.027 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:06.027 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:06.027 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:06.027 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.027 11:00:22 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:06.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:22:06.027 00:22:06.027 --- 10.0.0.2 ping statistics --- 00:22:06.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.027 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:22:06.027 00:22:06.027 --- 10.0.0.1 ping statistics --- 00:22:06.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.027 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2156934 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2156934 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2156934 ']' 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.027 11:00:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.027 [2024-07-12 11:00:22.598175] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
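With the target parked at --wait-for-rpc, the adq_configure_nvmf_target 0 call traced below reduces to a short RPC sequence; the argument 0 feeds --enable-placement-id, so this pass appears to be the baseline with ADQ placement disabled. $SPDK is the same workspace shorthand as before, and every command is taken from the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    $RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # load generator on four dedicated cores (perf_adq.sh@71):
    "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'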
00:22:06.027 [2024-07-12 11:00:22.598239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.027 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.027 [2024-07-12 11:00:22.685138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.027 [2024-07-12 11:00:22.785311] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.027 [2024-07-12 11:00:22.785369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.027 [2024-07-12 11:00:22.785378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.027 [2024-07-12 11:00:22.785385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.027 [2024-07-12 11:00:22.785394] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.027 [2024-07-12 11:00:22.785498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.027 [2024-07-12 11:00:22.785664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.027 [2024-07-12 11:00:22.785833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.027 [2024-07-12 11:00:22.785834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.605 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.606 11:00:23 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.867 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.867 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:06.867 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.867 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.867 [2024-07-12 11:00:23.606152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.867 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.867 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:06.867 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.867 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.867 Malloc1 00:22:06.867 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.867 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.868 [2024-07-12 11:00:23.671838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2157086 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:06.868 11:00:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:06.868 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.781 11:00:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:08.781 11:00:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.781 11:00:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.781 11:00:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.781 11:00:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:08.781 
"tick_rate": 2400000000, 00:22:08.781 "poll_groups": [ 00:22:08.781 { 00:22:08.781 "name": "nvmf_tgt_poll_group_000", 00:22:08.781 "admin_qpairs": 1, 00:22:08.781 "io_qpairs": 1, 00:22:08.781 "current_admin_qpairs": 1, 00:22:08.781 "current_io_qpairs": 1, 00:22:08.781 "pending_bdev_io": 0, 00:22:08.781 "completed_nvme_io": 16431, 00:22:08.781 "transports": [ 00:22:08.781 { 00:22:08.781 "trtype": "TCP" 00:22:08.781 } 00:22:08.781 ] 00:22:08.781 }, 00:22:08.781 { 00:22:08.781 "name": "nvmf_tgt_poll_group_001", 00:22:08.781 "admin_qpairs": 0, 00:22:08.781 "io_qpairs": 1, 00:22:08.781 "current_admin_qpairs": 0, 00:22:08.781 "current_io_qpairs": 1, 00:22:08.781 "pending_bdev_io": 0, 00:22:08.781 "completed_nvme_io": 19732, 00:22:08.781 "transports": [ 00:22:08.781 { 00:22:08.781 "trtype": "TCP" 00:22:08.781 } 00:22:08.781 ] 00:22:08.781 }, 00:22:08.781 { 00:22:08.781 "name": "nvmf_tgt_poll_group_002", 00:22:08.781 "admin_qpairs": 0, 00:22:08.781 "io_qpairs": 1, 00:22:08.781 "current_admin_qpairs": 0, 00:22:08.781 "current_io_qpairs": 1, 00:22:08.781 "pending_bdev_io": 0, 00:22:08.781 "completed_nvme_io": 19188, 00:22:08.781 "transports": [ 00:22:08.781 { 00:22:08.781 "trtype": "TCP" 00:22:08.781 } 00:22:08.781 ] 00:22:08.781 }, 00:22:08.781 { 00:22:08.781 "name": "nvmf_tgt_poll_group_003", 00:22:08.781 "admin_qpairs": 0, 00:22:08.781 "io_qpairs": 1, 00:22:08.781 "current_admin_qpairs": 0, 00:22:08.781 "current_io_qpairs": 1, 00:22:08.781 "pending_bdev_io": 0, 00:22:08.781 "completed_nvme_io": 16734, 00:22:08.781 "transports": [ 00:22:08.781 { 00:22:08.781 "trtype": "TCP" 00:22:08.781 } 00:22:08.781 ] 00:22:08.781 } 00:22:08.781 ] 00:22:08.781 }' 00:22:08.781 11:00:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:08.781 11:00:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:08.781 11:00:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:08.781 11:00:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:08.781 11:00:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2157086 00:22:16.918 Initializing NVMe Controllers 00:22:16.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:16.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:16.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:16.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:16.918 Initialization complete. Launching workers. 
00:22:16.918 ======================================================== 00:22:16.918 Latency(us) 00:22:16.918 Device Information : IOPS MiB/s Average min max 00:22:16.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13202.70 51.57 4847.88 1313.50 11292.58 00:22:16.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14054.50 54.90 4553.07 1312.95 14060.32 00:22:16.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12492.20 48.80 5122.96 1314.31 12806.66 00:22:16.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12489.50 48.79 5124.05 1231.25 14850.98 00:22:16.918 ======================================================== 00:22:16.918 Total : 52238.88 204.06 4900.37 1231.25 14850.98 00:22:16.918 00:22:16.918 11:00:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:16.918 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.918 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:16.918 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.918 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:16.918 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.918 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.918 rmmod nvme_tcp 00:22:16.918 rmmod nvme_fabrics 00:22:16.918 rmmod nvme_keyring 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2156934 ']' 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2156934 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2156934 ']' 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2156934 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2156934 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2156934' 00:22:17.179 killing process with pid 2156934 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2156934 00:22:17.179 11:00:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2156934 00:22:17.179 11:00:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:17.179 11:00:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:17.179 11:00:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:17.179 11:00:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:17.179 11:00:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:17.179 11:00:34 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.179 11:00:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.179 11:00:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.724 11:00:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:19.724 11:00:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:19.724 11:00:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:21.153 11:00:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:23.084 11:00:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.415 
11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:28.415 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:28.415 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
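This second pass through gather_supported_nvmf_pci_devs (rerun after the rmmod ice / modprobe ice cycle above) works purely from sysfs: every PCI function whose vendor:device pair is in the supported table (the two E810 ports report 0x8086 - 0x159b) is resolved to its kernel interface by globbing the device's net/ directory. A standalone sketch of that lookup; the lspci enumeration here stands in for the harness's prebuilt pci_bus_cache:

# map each Intel E810 function (8086:159b) to its net device name via sysfs
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same glob as nvmf/common.sh
    [[ -e ${pci_net_devs[0]} ]] || continue            # no net/ entries while unbound
    echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
done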
00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:28.415 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:28.415 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.415 11:00:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.415 
11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:22:28.415 00:22:28.415 --- 10.0.0.2 ping statistics --- 00:22:28.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.415 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:22:28.415 00:22:28.415 --- 10.0.0.1 ping statistics --- 00:22:28.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.415 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.415 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.416 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.416 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.416 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.416 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:28.416 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:28.416 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:28.416 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:28.416 net.core.busy_poll = 1 00:22:28.416 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:28.416 net.core.busy_read = 1 00:22:28.416 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:28.416 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2161731 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2161731 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2161731 ']' 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.676 11:00:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.676 [2024-07-12 11:00:45.591789] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:28.676 [2024-07-12 11:00:45.591859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.676 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.936 [2024-07-12 11:00:45.682313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.936 [2024-07-12 11:00:45.779040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.936 [2024-07-12 11:00:45.779101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.936 [2024-07-12 11:00:45.779109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.936 [2024-07-12 11:00:45.779116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.936 [2024-07-12 11:00:45.779131] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
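adq_configure_driver, traced above, is what separates this run from the first one: hardware TC offload is enabled on the target port, busy polling is switched on globally, mqprio splits the port into two traffic classes (two default queues at offset 0, two ADQ queues at offset 2), and a flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 to the second class entirely in hardware (skip_sw). A condensed sketch of those steps on a bare interface; in the log every command runs under ip netns exec cvl_0_0_ns_spdk:

IF=cvl_0_0                                      # target-side E810 port
ethtool --offload "$IF" hw-tc-offload on        # let the NIC execute tc filters
ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                  # busy-poll budget in usecs for poll()/select()
sysctl -w net.core.busy_read=1                  # and for blocking socket reads
# TC0 = 2 generic queues at offset 0, TC1 = 2 ADQ queues at offset 2
tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$IF" ingress
# steer NVMe/TCP for 10.0.0.2:4420 into TC1, in hardware only
tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1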
00:22:28.936 [2024-07-12 11:00:45.779221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.936 [2024-07-12 11:00:45.779468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.936 [2024-07-12 11:00:45.779695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.936 [2024-07-12 11:00:45.779698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.507 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.769 [2024-07-12 11:00:46.587615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.769 Malloc1 00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.769 11:00:46 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:29.769 [2024-07-12 11:00:46.653392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2161913
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2
00:22:29.769 11:00:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:29.769 EAL: No free 2048 kB hugepages reported on node 1
00:22:32.312 11:00:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats
00:22:32.312 11:00:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:32.312 11:00:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:32.312 11:00:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:32.312 11:00:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{
00:22:32.312 "tick_rate": 2400000000,
00:22:32.312 "poll_groups": [
00:22:32.312 {
00:22:32.312 "name": "nvmf_tgt_poll_group_000",
00:22:32.312 "admin_qpairs": 1,
00:22:32.312 "io_qpairs": 1,
00:22:32.312 "current_admin_qpairs": 1,
00:22:32.312 "current_io_qpairs": 1,
00:22:32.312 "pending_bdev_io": 0,
00:22:32.312 "completed_nvme_io": 25156,
00:22:32.312 "transports": [
00:22:32.312 {
00:22:32.312 "trtype": "TCP"
00:22:32.312 }
00:22:32.312 ]
00:22:32.312 },
00:22:32.312 {
00:22:32.312 "name": "nvmf_tgt_poll_group_001",
00:22:32.312 "admin_qpairs": 0,
00:22:32.312 "io_qpairs": 3,
00:22:32.312 "current_admin_qpairs": 0,
00:22:32.313 "current_io_qpairs": 3,
00:22:32.313 "pending_bdev_io": 0,
00:22:32.313 "completed_nvme_io": 31442,
00:22:32.313 "transports": [
00:22:32.313 {
00:22:32.313 "trtype": "TCP"
00:22:32.313 }
00:22:32.313 ]
00:22:32.313 },
00:22:32.313 {
00:22:32.313 "name": "nvmf_tgt_poll_group_002",
00:22:32.313 "admin_qpairs": 0,
00:22:32.313 "io_qpairs": 0,
00:22:32.313 "current_admin_qpairs": 0,
00:22:32.313 "current_io_qpairs": 0,
00:22:32.313 "pending_bdev_io": 0,
00:22:32.313 "completed_nvme_io": 0,
00:22:32.313 "transports": [ 00:22:32.313 { 00:22:32.313 "trtype": "TCP" 00:22:32.313 } 00:22:32.313 ] 00:22:32.313 }, 00:22:32.313 { 00:22:32.313 "name": "nvmf_tgt_poll_group_003", 00:22:32.313 "admin_qpairs": 0, 00:22:32.313 "io_qpairs": 0, 00:22:32.313 "current_admin_qpairs": 0, 00:22:32.313 "current_io_qpairs": 0, 00:22:32.313 "pending_bdev_io": 0, 00:22:32.313 "completed_nvme_io": 0, 00:22:32.313 "transports": [ 00:22:32.313 { 00:22:32.313 "trtype": "TCP" 00:22:32.313 } 00:22:32.313 ] 00:22:32.313 } 00:22:32.313 ] 00:22:32.313 }' 00:22:32.313 11:00:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:32.313 11:00:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:32.313 11:00:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:32.313 11:00:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:32.313 11:00:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2161913 00:22:40.452 Initializing NVMe Controllers 00:22:40.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:40.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:40.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:40.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:40.452 Initialization complete. Launching workers. 00:22:40.452 ======================================================== 00:22:40.452 Latency(us) 00:22:40.452 Device Information : IOPS MiB/s Average min max 00:22:40.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6229.70 24.33 10275.01 1323.20 59965.38 00:22:40.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7947.90 31.05 8052.28 1249.08 61281.82 00:22:40.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6552.40 25.60 9767.65 1369.82 57497.96 00:22:40.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 18250.19 71.29 3517.52 1214.65 45588.61 00:22:40.452 ======================================================== 00:22:40.452 Total : 38980.18 152.27 6572.72 1214.65 61281.82 00:22:40.452 00:22:40.452 11:00:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:40.452 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.452 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:40.452 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.452 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:40.452 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.453 rmmod nvme_tcp 00:22:40.453 rmmod nvme_fabrics 00:22:40.453 rmmod nvme_keyring 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2161731 ']' 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2161731 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2161731 ']' 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2161731 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2161731 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2161731' 00:22:40.453 killing process with pid 2161731 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2161731 00:22:40.453 11:00:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2161731 00:22:40.453 11:00:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.453 11:00:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.453 11:00:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.453 11:00:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.453 11:00:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.453 11:00:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.453 11:00:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.453 11:00:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.362 11:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.362 11:00:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:42.362 00:22:42.362 real 0m52.757s 00:22:42.362 user 2m49.731s 00:22:42.362 sys 0m10.850s 00:22:42.362 11:00:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.362 11:00:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.362 ************************************ 00:22:42.362 END TEST nvmf_perf_adq 00:22:42.362 ************************************ 00:22:42.362 11:00:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:42.362 11:00:59 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:42.362 11:00:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:42.362 11:00:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.362 11:00:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.362 ************************************ 00:22:42.362 START TEST nvmf_shutdown 00:22:42.362 ************************************ 00:22:42.362 11:00:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:42.622 * Looking for test storage... 
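killprocess, which has now run twice above (pids 2156934 and 2161731), is the harness's guarded teardown: kill -0 first probes that the pid is still alive, ps resolves the command name so that a stray sudo wrapper is never signalled, and only then is the process killed and reaped. A minimal sketch of that pattern; the real helper also branches on uname, as the '[' Linux = Linux ']' check above shows:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name == sudo ]] && return 1       # refuse to signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it when it is our child
}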
00:22:42.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:42.622 11:00:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.622 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:42.623 ************************************ 00:22:42.623 START TEST nvmf_shutdown_tc1 00:22:42.623 ************************************ 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:42.623 11:00:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.623 11:00:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:50.757 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:50.757 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.757 11:01:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:50.757 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:50.757 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:22:50.757 00:22:50.757 --- 10.0.0.2 ping statistics --- 00:22:50.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.757 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:22:50.757 00:22:50.757 --- 10.0.0.1 ping statistics --- 00:22:50.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.757 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2168203 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2168203 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2168203 ']' 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.757 11:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.757 [2024-07-12 11:01:06.897164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
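The nvmf_tcp_init sequence traced above splits the two ice ports between a private network namespace (the target side) and the root namespace (the initiator side), opens the NVMe/TCP listener port, and ping-verifies both directions before the target starts. A minimal sketch of that setup, lifted from the commands in this trace (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this run; substitute your own devices):

ip netns add cvl_0_0_ns_spdk                       # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP listener port
ping -c 1 10.0.0.2                                 # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability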
00:22:50.757 [2024-07-12 11:01:06.897228] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.757 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.757 [2024-07-12 11:01:06.983511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.757 [2024-07-12 11:01:07.080455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.757 [2024-07-12 11:01:07.080504] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.757 [2024-07-12 11:01:07.080512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.757 [2024-07-12 11:01:07.080519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.757 [2024-07-12 11:01:07.080525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.757 [2024-07-12 11:01:07.080685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.757 [2024-07-12 11:01:07.080863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.757 [2024-07-12 11:01:07.081020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.757 [2024-07-12 11:01:07.081021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.757 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.757 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:50.757 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.757 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:50.757 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.017 [2024-07-12 11:01:07.749508] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.017 11:01:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.017 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.018 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.018 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.018 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.018 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.018 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:51.018 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.018 11:01:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.018 Malloc1 00:22:51.018 [2024-07-12 11:01:07.863139] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.018 Malloc2 00:22:51.018 Malloc3 00:22:51.018 Malloc4 00:22:51.289 Malloc5 00:22:51.289 Malloc6 00:22:51.289 Malloc7 00:22:51.289 Malloc8 00:22:51.289 Malloc9 00:22:51.289 Malloc10 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2168452 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2168452 
/var/tmp/bdevperf.sock 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2168452 ']' 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.550 { 00:22:51.550 "params": { 00:22:51.550 "name": "Nvme$subsystem", 00:22:51.550 "trtype": "$TEST_TRANSPORT", 00:22:51.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.550 "adrfam": "ipv4", 00:22:51.550 "trsvcid": "$NVMF_PORT", 00:22:51.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.550 "hdgst": ${hdgst:-false}, 00:22:51.550 "ddgst": ${ddgst:-false} 00:22:51.550 }, 00:22:51.550 "method": "bdev_nvme_attach_controller" 00:22:51.550 } 00:22:51.550 EOF 00:22:51.550 )") 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.550 { 00:22:51.550 "params": { 00:22:51.550 "name": "Nvme$subsystem", 00:22:51.550 "trtype": "$TEST_TRANSPORT", 00:22:51.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.550 "adrfam": "ipv4", 00:22:51.550 "trsvcid": "$NVMF_PORT", 00:22:51.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.550 "hdgst": ${hdgst:-false}, 00:22:51.550 "ddgst": ${ddgst:-false} 00:22:51.550 }, 00:22:51.550 "method": "bdev_nvme_attach_controller" 00:22:51.550 } 00:22:51.550 EOF 00:22:51.550 )") 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.550 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.550 { 00:22:51.550 "params": { 00:22:51.550 
"name": "Nvme$subsystem", 00:22:51.550 "trtype": "$TEST_TRANSPORT", 00:22:51.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.550 "adrfam": "ipv4", 00:22:51.550 "trsvcid": "$NVMF_PORT", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.551 "hdgst": ${hdgst:-false}, 00:22:51.551 "ddgst": ${ddgst:-false} 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 } 00:22:51.551 EOF 00:22:51.551 )") 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.551 { 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme$subsystem", 00:22:51.551 "trtype": "$TEST_TRANSPORT", 00:22:51.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "$NVMF_PORT", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.551 "hdgst": ${hdgst:-false}, 00:22:51.551 "ddgst": ${ddgst:-false} 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 } 00:22:51.551 EOF 00:22:51.551 )") 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.551 { 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme$subsystem", 00:22:51.551 "trtype": "$TEST_TRANSPORT", 00:22:51.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "$NVMF_PORT", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.551 "hdgst": ${hdgst:-false}, 00:22:51.551 "ddgst": ${ddgst:-false} 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 } 00:22:51.551 EOF 00:22:51.551 )") 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.551 { 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme$subsystem", 00:22:51.551 "trtype": "$TEST_TRANSPORT", 00:22:51.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "$NVMF_PORT", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.551 "hdgst": ${hdgst:-false}, 00:22:51.551 "ddgst": ${ddgst:-false} 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 } 00:22:51.551 EOF 00:22:51.551 )") 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.551 [2024-07-12 11:01:08.372346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:51.551 [2024-07-12 11:01:08.372421] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.551 { 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme$subsystem", 00:22:51.551 "trtype": "$TEST_TRANSPORT", 00:22:51.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "$NVMF_PORT", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.551 "hdgst": ${hdgst:-false}, 00:22:51.551 "ddgst": ${ddgst:-false} 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 } 00:22:51.551 EOF 00:22:51.551 )") 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.551 { 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme$subsystem", 00:22:51.551 "trtype": "$TEST_TRANSPORT", 00:22:51.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "$NVMF_PORT", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.551 "hdgst": ${hdgst:-false}, 00:22:51.551 "ddgst": ${ddgst:-false} 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 } 00:22:51.551 EOF 00:22:51.551 )") 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.551 { 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme$subsystem", 00:22:51.551 "trtype": "$TEST_TRANSPORT", 00:22:51.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "$NVMF_PORT", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.551 "hdgst": ${hdgst:-false}, 00:22:51.551 "ddgst": ${ddgst:-false} 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 } 00:22:51.551 EOF 00:22:51.551 )") 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.551 { 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme$subsystem", 00:22:51.551 "trtype": "$TEST_TRANSPORT", 00:22:51.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "$NVMF_PORT", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.551 "hdgst": ${hdgst:-false}, 
00:22:51.551 "ddgst": ${ddgst:-false} 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 } 00:22:51.551 EOF 00:22:51.551 )") 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.551 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:51.551 11:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme1", 00:22:51.551 "trtype": "tcp", 00:22:51.551 "traddr": "10.0.0.2", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "4420", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.551 "hdgst": false, 00:22:51.551 "ddgst": false 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 },{ 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme2", 00:22:51.551 "trtype": "tcp", 00:22:51.551 "traddr": "10.0.0.2", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "4420", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.551 "hdgst": false, 00:22:51.551 "ddgst": false 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 },{ 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme3", 00:22:51.551 "trtype": "tcp", 00:22:51.551 "traddr": "10.0.0.2", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "4420", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:51.551 "hdgst": false, 00:22:51.551 "ddgst": false 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 },{ 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme4", 00:22:51.551 "trtype": "tcp", 00:22:51.551 "traddr": "10.0.0.2", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "4420", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:51.551 "hdgst": false, 00:22:51.551 "ddgst": false 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 },{ 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme5", 00:22:51.551 "trtype": "tcp", 00:22:51.551 "traddr": "10.0.0.2", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "4420", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:51.551 "hdgst": false, 00:22:51.551 "ddgst": false 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 },{ 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme6", 00:22:51.551 "trtype": "tcp", 00:22:51.551 "traddr": "10.0.0.2", 00:22:51.551 "adrfam": "ipv4", 00:22:51.551 "trsvcid": "4420", 00:22:51.551 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:51.551 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:51.551 "hdgst": false, 00:22:51.551 "ddgst": false 00:22:51.551 }, 00:22:51.551 "method": "bdev_nvme_attach_controller" 00:22:51.551 },{ 00:22:51.551 "params": { 00:22:51.551 "name": "Nvme7", 00:22:51.552 "trtype": "tcp", 00:22:51.552 "traddr": "10.0.0.2", 00:22:51.552 "adrfam": "ipv4", 00:22:51.552 "trsvcid": "4420", 00:22:51.552 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:51.552 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:51.552 "hdgst": false, 00:22:51.552 "ddgst": false 
00:22:51.552 }, 00:22:51.552 "method": "bdev_nvme_attach_controller" 00:22:51.552 },{ 00:22:51.552 "params": { 00:22:51.552 "name": "Nvme8", 00:22:51.552 "trtype": "tcp", 00:22:51.552 "traddr": "10.0.0.2", 00:22:51.552 "adrfam": "ipv4", 00:22:51.552 "trsvcid": "4420", 00:22:51.552 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:51.552 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:51.552 "hdgst": false, 00:22:51.552 "ddgst": false 00:22:51.552 }, 00:22:51.552 "method": "bdev_nvme_attach_controller" 00:22:51.552 },{ 00:22:51.552 "params": { 00:22:51.552 "name": "Nvme9", 00:22:51.552 "trtype": "tcp", 00:22:51.552 "traddr": "10.0.0.2", 00:22:51.552 "adrfam": "ipv4", 00:22:51.552 "trsvcid": "4420", 00:22:51.552 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:51.552 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:51.552 "hdgst": false, 00:22:51.552 "ddgst": false 00:22:51.552 }, 00:22:51.552 "method": "bdev_nvme_attach_controller" 00:22:51.552 },{ 00:22:51.552 "params": { 00:22:51.552 "name": "Nvme10", 00:22:51.552 "trtype": "tcp", 00:22:51.552 "traddr": "10.0.0.2", 00:22:51.552 "adrfam": "ipv4", 00:22:51.552 "trsvcid": "4420", 00:22:51.552 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:51.552 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:51.552 "hdgst": false, 00:22:51.552 "ddgst": false 00:22:51.552 }, 00:22:51.552 "method": "bdev_nvme_attach_controller" 00:22:51.552 }' 00:22:51.552 [2024-07-12 11:01:08.456521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.813 [2024-07-12 11:01:08.553392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.195 11:01:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.195 11:01:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:53.195 11:01:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:53.195 11:01:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.195 11:01:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:53.195 11:01:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.195 11:01:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2168452 00:22:53.195 11:01:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:53.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2168452 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:53.195 11:01:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2168203 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
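The gen_nvmf_target_json helper traced here (for bdev_svc above, and again for the bdevperf retry below) builds one bdev_nvme_attach_controller entry per subsystem as a heredoc fragment, comma-joins the fragments, and hands the result to the application on an anonymous descriptor (--json /dev/fd/63). A condensed sketch of that assembly idea with this run's values filled in; the real helper in nvmf/common.sh also runs the result through jq (see nvmf/common.sh@556 in the trace), which is omitted here:

config=()
for i in {1..10}; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
(IFS=,; printf '%s\n' "${config[*]}")   # comma-joined list, matching the printf output in the trace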
00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.137 { 00:22:54.137 "params": { 00:22:54.137 "name": "Nvme$subsystem", 00:22:54.137 "trtype": "$TEST_TRANSPORT", 00:22:54.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.137 "adrfam": "ipv4", 00:22:54.137 "trsvcid": "$NVMF_PORT", 00:22:54.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.137 "hdgst": ${hdgst:-false}, 00:22:54.137 "ddgst": ${ddgst:-false} 00:22:54.137 }, 00:22:54.137 "method": "bdev_nvme_attach_controller" 00:22:54.137 } 00:22:54.137 EOF 00:22:54.137 )") 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.137 { 00:22:54.137 "params": { 00:22:54.137 "name": "Nvme$subsystem", 00:22:54.137 "trtype": "$TEST_TRANSPORT", 00:22:54.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.137 "adrfam": "ipv4", 00:22:54.137 "trsvcid": "$NVMF_PORT", 00:22:54.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.137 "hdgst": ${hdgst:-false}, 00:22:54.137 "ddgst": ${ddgst:-false} 00:22:54.137 }, 00:22:54.137 "method": "bdev_nvme_attach_controller" 00:22:54.137 } 00:22:54.137 EOF 00:22:54.137 )") 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.137 { 00:22:54.137 "params": { 00:22:54.137 "name": "Nvme$subsystem", 00:22:54.137 "trtype": "$TEST_TRANSPORT", 00:22:54.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.137 "adrfam": "ipv4", 00:22:54.137 "trsvcid": "$NVMF_PORT", 00:22:54.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.137 "hdgst": ${hdgst:-false}, 00:22:54.137 "ddgst": ${ddgst:-false} 00:22:54.137 }, 00:22:54.137 "method": "bdev_nvme_attach_controller" 00:22:54.137 } 00:22:54.137 EOF 00:22:54.137 )") 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.137 { 00:22:54.137 "params": { 00:22:54.137 "name": "Nvme$subsystem", 00:22:54.137 "trtype": "$TEST_TRANSPORT", 00:22:54.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.137 "adrfam": "ipv4", 00:22:54.137 "trsvcid": "$NVMF_PORT", 00:22:54.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.137 "hdgst": ${hdgst:-false}, 00:22:54.137 "ddgst": ${ddgst:-false} 00:22:54.137 }, 00:22:54.137 "method": "bdev_nvme_attach_controller" 00:22:54.137 } 00:22:54.137 EOF 00:22:54.137 )") 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.137 11:01:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.137 { 00:22:54.137 "params": { 00:22:54.137 "name": "Nvme$subsystem", 00:22:54.137 "trtype": "$TEST_TRANSPORT", 00:22:54.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.137 "adrfam": "ipv4", 00:22:54.137 "trsvcid": "$NVMF_PORT", 00:22:54.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.137 "hdgst": ${hdgst:-false}, 00:22:54.137 "ddgst": ${ddgst:-false} 00:22:54.137 }, 00:22:54.137 "method": "bdev_nvme_attach_controller" 00:22:54.137 } 00:22:54.137 EOF 00:22:54.137 )") 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.137 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.137 { 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme$subsystem", 00:22:54.138 "trtype": "$TEST_TRANSPORT", 00:22:54.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "$NVMF_PORT", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.138 "hdgst": ${hdgst:-false}, 00:22:54.138 "ddgst": ${ddgst:-false} 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 } 00:22:54.138 EOF 00:22:54.138 )") 00:22:54.138 11:01:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.138 { 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme$subsystem", 00:22:54.138 "trtype": "$TEST_TRANSPORT", 00:22:54.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "$NVMF_PORT", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.138 "hdgst": ${hdgst:-false}, 00:22:54.138 "ddgst": ${ddgst:-false} 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 } 00:22:54.138 EOF 00:22:54.138 )") 00:22:54.138 [2024-07-12 11:01:11.002424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:54.138 [2024-07-12 11:01:11.002481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169104 ] 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.138 { 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme$subsystem", 00:22:54.138 "trtype": "$TEST_TRANSPORT", 00:22:54.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "$NVMF_PORT", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.138 "hdgst": ${hdgst:-false}, 00:22:54.138 "ddgst": ${ddgst:-false} 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 } 00:22:54.138 EOF 00:22:54.138 )") 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.138 { 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme$subsystem", 00:22:54.138 "trtype": "$TEST_TRANSPORT", 00:22:54.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "$NVMF_PORT", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.138 "hdgst": ${hdgst:-false}, 00:22:54.138 "ddgst": ${ddgst:-false} 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 } 00:22:54.138 EOF 00:22:54.138 )") 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.138 { 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme$subsystem", 00:22:54.138 "trtype": "$TEST_TRANSPORT", 00:22:54.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "$NVMF_PORT", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.138 "hdgst": ${hdgst:-false}, 00:22:54.138 "ddgst": ${ddgst:-false} 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 } 00:22:54.138 EOF 00:22:54.138 )") 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.138 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:54.138 11:01:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme1", 00:22:54.138 "trtype": "tcp", 00:22:54.138 "traddr": "10.0.0.2", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "4420", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.138 "hdgst": false, 00:22:54.138 "ddgst": false 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 },{ 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme2", 00:22:54.138 "trtype": "tcp", 00:22:54.138 "traddr": "10.0.0.2", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "4420", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:54.138 "hdgst": false, 00:22:54.138 "ddgst": false 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 },{ 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme3", 00:22:54.138 "trtype": "tcp", 00:22:54.138 "traddr": "10.0.0.2", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "4420", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:54.138 "hdgst": false, 00:22:54.138 "ddgst": false 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 },{ 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme4", 00:22:54.138 "trtype": "tcp", 00:22:54.138 "traddr": "10.0.0.2", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "4420", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:54.138 "hdgst": false, 00:22:54.138 "ddgst": false 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 },{ 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme5", 00:22:54.138 "trtype": "tcp", 00:22:54.138 "traddr": "10.0.0.2", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "4420", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:54.138 "hdgst": false, 00:22:54.138 "ddgst": false 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 },{ 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme6", 00:22:54.138 "trtype": "tcp", 00:22:54.138 "traddr": "10.0.0.2", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "4420", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:54.138 "hdgst": false, 00:22:54.138 "ddgst": false 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 },{ 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme7", 00:22:54.138 "trtype": "tcp", 00:22:54.138 "traddr": "10.0.0.2", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "4420", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:54.138 "hdgst": false, 00:22:54.138 "ddgst": false 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 },{ 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme8", 00:22:54.138 "trtype": "tcp", 00:22:54.138 "traddr": "10.0.0.2", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "4420", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:54.138 "hdgst": false, 
00:22:54.138 "ddgst": false 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 },{ 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme9", 00:22:54.138 "trtype": "tcp", 00:22:54.138 "traddr": "10.0.0.2", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "4420", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:54.138 "hdgst": false, 00:22:54.138 "ddgst": false 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 },{ 00:22:54.138 "params": { 00:22:54.138 "name": "Nvme10", 00:22:54.138 "trtype": "tcp", 00:22:54.138 "traddr": "10.0.0.2", 00:22:54.138 "adrfam": "ipv4", 00:22:54.138 "trsvcid": "4420", 00:22:54.138 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:54.138 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:54.138 "hdgst": false, 00:22:54.138 "ddgst": false 00:22:54.138 }, 00:22:54.138 "method": "bdev_nvme_attach_controller" 00:22:54.138 }' 00:22:54.138 [2024-07-12 11:01:11.081302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.398 [2024-07-12 11:01:11.145223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.782 Running I/O for 1 seconds... 00:22:56.721 00:22:56.721 Latency(us) 00:22:56.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.721 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.721 Verification LBA range: start 0x0 length 0x400 00:22:56.721 Nvme1n1 : 1.09 235.89 14.74 0.00 0.00 268282.88 21080.75 223696.21 00:22:56.721 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.721 Verification LBA range: start 0x0 length 0x400 00:22:56.721 Nvme2n1 : 1.14 224.78 14.05 0.00 0.00 276913.92 18022.40 239424.85 00:22:56.721 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.721 Verification LBA range: start 0x0 length 0x400 00:22:56.721 Nvme3n1 : 1.07 242.90 15.18 0.00 0.00 249867.67 6253.23 242920.11 00:22:56.721 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.721 Verification LBA range: start 0x0 length 0x400 00:22:56.721 Nvme4n1 : 1.15 222.77 13.92 0.00 0.00 269560.75 21299.20 267386.88 00:22:56.721 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.721 Verification LBA range: start 0x0 length 0x400 00:22:56.721 Nvme5n1 : 1.18 216.78 13.55 0.00 0.00 272848.21 21299.20 253405.87 00:22:56.721 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.721 Verification LBA range: start 0x0 length 0x400 00:22:56.721 Nvme6n1 : 1.15 223.01 13.94 0.00 0.00 259533.65 21736.11 253405.87 00:22:56.721 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.721 Verification LBA range: start 0x0 length 0x400 00:22:56.721 Nvme7n1 : 1.19 268.51 16.78 0.00 0.00 212683.78 16602.45 274377.39 00:22:56.721 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.721 Verification LBA range: start 0x0 length 0x400 00:22:56.721 Nvme8n1 : 1.19 269.53 16.85 0.00 0.00 207954.94 19223.89 249910.61 00:22:56.721 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.721 Verification LBA range: start 0x0 length 0x400 00:22:56.721 Nvme9n1 : 1.20 319.40 19.96 0.00 0.00 172344.92 11851.09 241172.48 00:22:56.721 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.721 Verification LBA range: start 0x0 length 0x400 
00:22:56.721 Nvme10n1 : 1.18 217.26 13.58 0.00 0.00 248030.29 23920.64 272629.76 00:22:56.721 =================================================================================================================== 00:22:56.721 Total : 2440.83 152.55 0.00 0.00 239074.96 6253.23 274377.39 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.981 rmmod nvme_tcp 00:22:56.981 rmmod nvme_fabrics 00:22:56.981 rmmod nvme_keyring 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2168203 ']' 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2168203 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2168203 ']' 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2168203 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2168203 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2168203' 00:22:56.981 killing process with pid 2168203 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2168203 00:22:56.981 11:01:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2168203 00:22:57.241 11:01:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:57.241 11:01:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:57.241 11:01:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:57.241 11:01:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.241 11:01:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.241 11:01:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.241 11:01:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.241 11:01:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:59.780 00:22:59.780 real 0m16.778s 00:22:59.780 user 0m34.334s 00:22:59.780 sys 0m6.726s 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.780 ************************************ 00:22:59.780 END TEST nvmf_shutdown_tc1 00:22:59.780 ************************************ 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.780 ************************************ 00:22:59.780 START TEST nvmf_shutdown_tc2 00:22:59.780 ************************************ 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:59.780 11:01:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.780 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:59.781 11:01:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:59.781 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:59.781 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:22:59.781 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:59.781 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:59.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:22:59.781 00:22:59.781 --- 10.0.0.2 ping statistics --- 00:22:59.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.781 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:22:59.781 00:22:59.781 --- 10.0.0.1 ping statistics --- 00:22:59.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.781 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2170222 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2170222 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2170222 ']' 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.781 11:01:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.781 [2024-07-12 11:01:16.737455] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:59.781 [2024-07-12 11:01:16.737507] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.039 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.039 [2024-07-12 11:01:16.819718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.039 [2024-07-12 11:01:16.875638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.039 [2024-07-12 11:01:16.875671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.039 [2024-07-12 11:01:16.875676] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.039 [2024-07-12 11:01:16.875681] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.039 [2024-07-12 11:01:16.875685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
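
[annotation] The trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of such a wait loop, assuming SPDK's scripts/rpc.py is on PATH — this is a hedged reconstruction, not the exact helper from common/autotest_common.sh:

# Poll the freshly launched target's RPC socket until it serves a trivial
# method, bailing out early if the target process dies during startup.
waitforlisten_sketch() {
  local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
  for ((i = 100; i > 0; i--)); do
    kill -0 "$pid" 2> /dev/null || return 1     # target died while starting
    if scripts/rpc.py -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null; then
      return 0                                  # socket is up and serving RPCs
    fi
    sleep 0.5
  done
  return 1                                      # gave up after ~50 s
}

Polling an RPC method rather than just the socket file avoids racing the window where the socket exists but the app has not finished framework init.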
00:23:00.039 [2024-07-12 11:01:16.875831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.039 [2024-07-12 11:01:16.875967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.039 [2024-07-12 11:01:16.876117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.039 [2024-07-12 11:01:16.876119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.608 [2024-07-12 11:01:17.556339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.608 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.868 11:01:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.868 11:01:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.868 Malloc1 00:23:00.868 [2024-07-12 11:01:17.655106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.868 Malloc2 00:23:00.868 Malloc3 00:23:00.868 Malloc4 00:23:00.868 Malloc5 00:23:00.868 Malloc6 00:23:01.129 Malloc7 00:23:01.129 Malloc8 00:23:01.129 Malloc9 00:23:01.129 Malloc10 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2170600 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2170600 /var/tmp/bdevperf.sock 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2170600 ']' 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
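
[annotation] The create_subsystems loop above (target/shutdown.sh@27/@28) appends one RPC batch per subsystem to rpcs.txt and replays it through rpc_cmd, which is what produces the Malloc1..Malloc10 bdevs and the TCP listener on 10.0.0.2:4420. The file itself is never echoed in the trace, so the following is a hedged guess at its shape — the malloc size, block size and serial numbers are illustrative, not taken from the log:

for i in {1..10}; do
  cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py < rpcs.txt    # rpc.py executes one RPC per input line

Batching all forty RPCs through a single rpc.py invocation is much cheaper than forty separate client startups, which matters when the target is about to be shut down mid-I/O anyway.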
00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.129 { 00:23:01.129 "params": { 00:23:01.129 "name": "Nvme$subsystem", 00:23:01.129 "trtype": "$TEST_TRANSPORT", 00:23:01.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.129 "adrfam": "ipv4", 00:23:01.129 "trsvcid": "$NVMF_PORT", 00:23:01.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.129 "hdgst": ${hdgst:-false}, 00:23:01.129 "ddgst": ${ddgst:-false} 00:23:01.129 }, 00:23:01.129 "method": "bdev_nvme_attach_controller" 00:23:01.129 } 00:23:01.129 EOF 00:23:01.129 )") 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.129 { 00:23:01.129 "params": { 00:23:01.129 "name": "Nvme$subsystem", 00:23:01.129 "trtype": "$TEST_TRANSPORT", 00:23:01.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.129 "adrfam": "ipv4", 00:23:01.129 "trsvcid": "$NVMF_PORT", 00:23:01.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.129 "hdgst": ${hdgst:-false}, 00:23:01.129 "ddgst": ${ddgst:-false} 00:23:01.129 }, 00:23:01.129 "method": "bdev_nvme_attach_controller" 00:23:01.129 } 00:23:01.129 EOF 00:23:01.129 )") 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.129 { 00:23:01.129 "params": { 00:23:01.129 "name": "Nvme$subsystem", 00:23:01.129 "trtype": "$TEST_TRANSPORT", 00:23:01.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.129 "adrfam": "ipv4", 00:23:01.129 "trsvcid": "$NVMF_PORT", 00:23:01.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.129 "hdgst": ${hdgst:-false}, 00:23:01.129 "ddgst": ${ddgst:-false} 00:23:01.129 }, 00:23:01.129 "method": "bdev_nvme_attach_controller" 00:23:01.129 } 00:23:01.129 EOF 00:23:01.129 )") 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:01.129 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.130 { 00:23:01.130 "params": { 00:23:01.130 "name": "Nvme$subsystem", 00:23:01.130 "trtype": "$TEST_TRANSPORT", 00:23:01.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.130 "adrfam": "ipv4", 00:23:01.130 "trsvcid": "$NVMF_PORT", 00:23:01.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.130 "hdgst": ${hdgst:-false}, 00:23:01.130 "ddgst": ${ddgst:-false} 00:23:01.130 }, 00:23:01.130 "method": "bdev_nvme_attach_controller" 00:23:01.130 } 00:23:01.130 EOF 00:23:01.130 )") 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.130 { 00:23:01.130 "params": { 00:23:01.130 "name": "Nvme$subsystem", 00:23:01.130 "trtype": "$TEST_TRANSPORT", 00:23:01.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.130 "adrfam": "ipv4", 00:23:01.130 "trsvcid": "$NVMF_PORT", 00:23:01.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.130 "hdgst": ${hdgst:-false}, 00:23:01.130 "ddgst": ${ddgst:-false} 00:23:01.130 }, 00:23:01.130 "method": "bdev_nvme_attach_controller" 00:23:01.130 } 00:23:01.130 EOF 00:23:01.130 )") 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.130 { 00:23:01.130 "params": { 00:23:01.130 "name": "Nvme$subsystem", 00:23:01.130 "trtype": "$TEST_TRANSPORT", 00:23:01.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.130 "adrfam": "ipv4", 00:23:01.130 "trsvcid": "$NVMF_PORT", 00:23:01.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.130 "hdgst": ${hdgst:-false}, 00:23:01.130 "ddgst": ${ddgst:-false} 00:23:01.130 }, 00:23:01.130 "method": "bdev_nvme_attach_controller" 00:23:01.130 } 00:23:01.130 EOF 00:23:01.130 )") 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.130 [2024-07-12 11:01:18.099485] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
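
[annotation] Each iteration of the gen_nvmf_target_json loop above expands the quoted heredoc with $subsystem substituted and pushes the resulting "params"/"method" fragment onto the config array. A hedged sketch of how those fragments can then be stitched into the final document handed to bdevperf on /dev/fd/63 (the IFS=, and jq . steps are visible in the trace further below; the real helper in nvmf/common.sh may differ in detail):

# Join the per-subsystem fragments with commas and let jq validate and
# pretty-print the assembled bdev configuration document.
gen_target_json_sketch() {
  local IFS=,
  jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ ${config[*]} ]
    }
  ]
}
JSON
}

Setting IFS=, locally makes the "${config[*]}" expansion emit a comma-separated list, which is exactly the joined ten-controller JSON printed later in this trace.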
00:23:01.130 [2024-07-12 11:01:18.099538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170600 ] 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.130 { 00:23:01.130 "params": { 00:23:01.130 "name": "Nvme$subsystem", 00:23:01.130 "trtype": "$TEST_TRANSPORT", 00:23:01.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.130 "adrfam": "ipv4", 00:23:01.130 "trsvcid": "$NVMF_PORT", 00:23:01.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.130 "hdgst": ${hdgst:-false}, 00:23:01.130 "ddgst": ${ddgst:-false} 00:23:01.130 }, 00:23:01.130 "method": "bdev_nvme_attach_controller" 00:23:01.130 } 00:23:01.130 EOF 00:23:01.130 )") 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.130 { 00:23:01.130 "params": { 00:23:01.130 "name": "Nvme$subsystem", 00:23:01.130 "trtype": "$TEST_TRANSPORT", 00:23:01.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.130 "adrfam": "ipv4", 00:23:01.130 "trsvcid": "$NVMF_PORT", 00:23:01.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.130 "hdgst": ${hdgst:-false}, 00:23:01.130 "ddgst": ${ddgst:-false} 00:23:01.130 }, 00:23:01.130 "method": "bdev_nvme_attach_controller" 00:23:01.130 } 00:23:01.130 EOF 00:23:01.130 )") 00:23:01.130 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.391 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.391 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.391 { 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme$subsystem", 00:23:01.391 "trtype": "$TEST_TRANSPORT", 00:23:01.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "$NVMF_PORT", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.391 "hdgst": ${hdgst:-false}, 00:23:01.391 "ddgst": ${ddgst:-false} 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 } 00:23:01.391 EOF 00:23:01.391 )") 00:23:01.391 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.391 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.391 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.391 { 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme$subsystem", 00:23:01.391 "trtype": "$TEST_TRANSPORT", 00:23:01.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "$NVMF_PORT", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.391 
"hdgst": ${hdgst:-false}, 00:23:01.391 "ddgst": ${ddgst:-false} 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 } 00:23:01.391 EOF 00:23:01.391 )") 00:23:01.391 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.391 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.391 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:01.391 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:01.391 11:01:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme1", 00:23:01.391 "trtype": "tcp", 00:23:01.391 "traddr": "10.0.0.2", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "4420", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.391 "hdgst": false, 00:23:01.391 "ddgst": false 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 },{ 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme2", 00:23:01.391 "trtype": "tcp", 00:23:01.391 "traddr": "10.0.0.2", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "4420", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.391 "hdgst": false, 00:23:01.391 "ddgst": false 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 },{ 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme3", 00:23:01.391 "trtype": "tcp", 00:23:01.391 "traddr": "10.0.0.2", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "4420", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.391 "hdgst": false, 00:23:01.391 "ddgst": false 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 },{ 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme4", 00:23:01.391 "trtype": "tcp", 00:23:01.391 "traddr": "10.0.0.2", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "4420", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.391 "hdgst": false, 00:23:01.391 "ddgst": false 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 },{ 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme5", 00:23:01.391 "trtype": "tcp", 00:23:01.391 "traddr": "10.0.0.2", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "4420", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.391 "hdgst": false, 00:23:01.391 "ddgst": false 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 },{ 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme6", 00:23:01.391 "trtype": "tcp", 00:23:01.391 "traddr": "10.0.0.2", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "4420", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.391 "hdgst": false, 00:23:01.391 "ddgst": false 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 },{ 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme7", 00:23:01.391 "trtype": "tcp", 00:23:01.391 "traddr": "10.0.0.2", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "4420", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.391 "hdgst": false, 
00:23:01.391 "ddgst": false 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 },{ 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme8", 00:23:01.391 "trtype": "tcp", 00:23:01.391 "traddr": "10.0.0.2", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "4420", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.391 "hdgst": false, 00:23:01.391 "ddgst": false 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 },{ 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme9", 00:23:01.391 "trtype": "tcp", 00:23:01.391 "traddr": "10.0.0.2", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "4420", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:01.391 "hdgst": false, 00:23:01.391 "ddgst": false 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 },{ 00:23:01.391 "params": { 00:23:01.391 "name": "Nvme10", 00:23:01.391 "trtype": "tcp", 00:23:01.391 "traddr": "10.0.0.2", 00:23:01.391 "adrfam": "ipv4", 00:23:01.391 "trsvcid": "4420", 00:23:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.391 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.391 "hdgst": false, 00:23:01.391 "ddgst": false 00:23:01.391 }, 00:23:01.391 "method": "bdev_nvme_attach_controller" 00:23:01.391 }' 00:23:01.391 [2024-07-12 11:01:18.177772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.391 [2024-07-12 11:01:18.241997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.815 Running I/O for 10 seconds... 00:23:02.815 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.815 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:02.815 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:02.815 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.815 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:03.074 11:01:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:03.333 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:03.333 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:03.333 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.333 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.333 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.333 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.333 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.333 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:03.333 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:03.333 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2170600 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2170600 ']' 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2170600 00:23:03.592 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:03.592 11:01:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.852 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2170600 00:23:03.852 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:03.852 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:03.852 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2170600' 00:23:03.852 killing process with pid 2170600 00:23:03.852 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2170600 00:23:03.852 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2170600 00:23:03.852 Received shutdown signal, test time was about 0.979988 seconds 00:23:03.852 00:23:03.852 Latency(us) 00:23:03.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.852 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.852 Verification LBA range: start 0x0 length 0x400 00:23:03.852 Nvme1n1 : 0.98 262.30 16.39 0.00 0.00 240940.80 32112.64 232434.35 00:23:03.852 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.852 Verification LBA range: start 0x0 length 0x400 00:23:03.852 Nvme2n1 : 0.97 265.12 16.57 0.00 0.00 233687.25 18568.53 242920.11 00:23:03.852 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.852 Verification LBA range: start 0x0 length 0x400 00:23:03.852 Nvme3n1 : 0.98 261.47 16.34 0.00 0.00 232120.32 17694.72 249910.61 00:23:03.852 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.852 Verification LBA range: start 0x0 length 0x400 00:23:03.852 Nvme4n1 : 0.95 270.14 16.88 0.00 0.00 219211.73 11031.89 249910.61 00:23:03.852 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.852 Verification LBA range: start 0x0 length 0x400 00:23:03.852 Nvme5n1 : 0.95 202.27 12.64 0.00 0.00 286169.32 21845.33 253405.87 00:23:03.852 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.852 Verification LBA range: start 0x0 length 0x400 00:23:03.852 Nvme6n1 : 0.95 203.13 12.70 0.00 0.00 279019.52 19988.48 246415.36 00:23:03.852 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.852 Verification LBA range: start 0x0 length 0x400 00:23:03.852 Nvme7n1 : 0.98 262.55 16.41 0.00 0.00 211613.44 32768.00 218453.33 00:23:03.852 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.852 Verification LBA range: start 0x0 length 0x400 00:23:03.852 Nvme8n1 : 0.97 264.18 16.51 0.00 0.00 205570.13 20862.29 255153.49 00:23:03.852 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.852 Verification LBA range: start 0x0 length 0x400 00:23:03.852 Nvme9n1 : 0.96 200.54 12.53 0.00 0.00 263737.74 19660.80 249910.61 00:23:03.852 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.852 Verification LBA range: start 0x0 length 0x400 00:23:03.852 Nvme10n1 : 0.96 199.38 12.46 0.00 0.00 259110.97 22282.24 272629.76 00:23:03.852 =================================================================================================================== 00:23:03.852 Total : 2391.08 149.44 0.00 0.00 239907.98 
11031.89 272629.76 00:23:04.112 11:01:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2170222 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:05.051 rmmod nvme_tcp 00:23:05.051 rmmod nvme_fabrics 00:23:05.051 rmmod nvme_keyring 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2170222 ']' 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2170222 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2170222 ']' 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2170222 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2170222 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2170222' 00:23:05.051 killing process with pid 2170222 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2170222 00:23:05.051 11:01:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2170222 00:23:05.311 11:01:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
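
[annotation] killprocess has now been traced twice in this test case (bdevperf pid 2170600 above, the target pid 2170222 here). Stitched together, the xtraced steps amount to roughly the following reconstruction — the @NNN comments refer to the common/autotest_common.sh line numbers shown in the trace, and the body is inferred from it rather than copied from the source:

killprocess_sketch() {
  local pid=$1 process_name
  [ -z "$pid" ] && return 1                  # @948: a pid argument is required
  kill -0 "$pid"                             # @952: fail if already gone
  if [ "$(uname)" = Linux ]; then            # @953
    process_name=$(ps --no-headers -o comm= "$pid")   # @954
  fi
  [ "$process_name" = sudo ] && return 1     # @958: never signal a sudo shim
  echo "killing process with pid $pid"       # @966
  kill "$pid"                                # @967: default SIGTERM
  wait "$pid"                                # @972: reap and propagate status
}

The sudo check explains the process_name=reactor_1 line in the trace: the helper refuses to SIGTERM a sudo wrapper and only kills the actual SPDK reactor process.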
00:23:05.311 11:01:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:05.311 11:01:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:05.311 11:01:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.311 11:01:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:05.311 11:01:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.311 11:01:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.311 11:01:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:07.869 00:23:07.869 real 0m7.981s 00:23:07.869 user 0m24.175s 00:23:07.869 sys 0m1.277s 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.869 ************************************ 00:23:07.869 END TEST nvmf_shutdown_tc2 00:23:07.869 ************************************ 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.869 ************************************ 00:23:07.869 START TEST nvmf_shutdown_tc3 00:23:07.869 ************************************ 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
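
[annotation] nvmf_shutdown_tc3 now repeats the same E810 discovery that ran for tc2: gather_supported_nvmf_pci_devs walks the PCI bus, keeps Intel 0x159b functions, and records the netdev each one exposes (the "Found 0000:4b:00.x" and "Found net devices under ..." lines below). A hedged sysfs-level sketch of that walk — the 0x8086/0x159b IDs are the ones matched in the trace, everything else is illustrative:

# Enumerate E810 (8086:159b) PCI functions via sysfs and collect the
# kernel netdev name bound to each one.
intel=0x8086 e810=0x159b
net_devs=()
for pci in /sys/bus/pci/devices/*; do
  [[ $(< "$pci/vendor") == "$intel" ]] || continue
  [[ $(< "$pci/device") == "$e810" ]] || continue
  echo "Found ${pci##*/} ($intel - $e810)"
  for dev in "$pci"/net/*; do
    [[ -e $dev ]] || continue                # skip functions with no netdev
    echo "Found net devices under ${pci##*/}: ${dev##*/}"
    net_devs+=("${dev##*/}")
  done
done

With NET_TYPE=phy this yields the two cvl_0_* interfaces, one of which is then moved into the test namespace for the target side.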
00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:07.869 11:01:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:07.869 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:07.869 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:07.869 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:07.869 11:01:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:07.869 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:07.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:23:07.869 00:23:07.869 --- 10.0.0.2 ping statistics --- 00:23:07.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.869 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:23:07.869 00:23:07.869 --- 10.0.0.1 ping statistics --- 00:23:07.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.869 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.869 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2172040 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2172040 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2172040 ']' 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.870 11:01:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.870 [2024-07-12 11:01:24.829454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:07.870 [2024-07-12 11:01:24.829518] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.137 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.137 [2024-07-12 11:01:24.915766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.137 [2024-07-12 11:01:24.976795] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.137 [2024-07-12 11:01:24.976829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.137 [2024-07-12 11:01:24.976834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.137 [2024-07-12 11:01:24.976839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.137 [2024-07-12 11:01:24.976842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
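(The target launch traced above boils down to the following shape; waitforlisten's body is paraphrased here as a simple poll loop, the real helper in autotest_common.sh is more elaborate:

    # start the target inside the server-side namespace; -m 0x1E pins reactors to cores 1-4,
    # -e 0xFFFF is the tracepoint group mask reported by app_setup_trace in the notices
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # block until the app answers on its RPC socket before issuing any rpc_cmd
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done
)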
00:23:08.137 [2024-07-12 11:01:24.977023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.137 [2024-07-12 11:01:24.977176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.137 [2024-07-12 11:01:24.977332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.137 [2024-07-12 11:01:24.977334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.707 [2024-07-12 11:01:25.652269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.707 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.967 11:01:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.967 11:01:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.967 Malloc1 00:23:08.967 [2024-07-12 11:01:25.750932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.967 Malloc2 00:23:08.967 Malloc3 00:23:08.967 Malloc4 00:23:08.967 Malloc5 00:23:08.967 Malloc6 00:23:09.226 Malloc7 00:23:09.226 Malloc8 00:23:09.226 Malloc9 00:23:09.226 Malloc10 00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2172261 00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2172261 /var/tmp/bdevperf.sock 00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2172261 ']' 00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
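(Malloc1 through Malloc10 above come from the rpcs.txt batch that the create_subsystems loop writes one cat block at a time and then replays via rpc_cmd at shutdown.sh line 35. Per subsystem the batch is, in outline; the RPC names are the standard SPDK ones, though the exact sizes and flags used by shutdown.sh may differ:

    for i in {1..10}; do
        cat >> rpcs.txt <<EOF
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
    done
    rpc_cmd < rpcs.txt    # replay the whole batch over one RPC session
)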
00:23:09.226 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.227 { 00:23:09.227 "params": { 00:23:09.227 "name": "Nvme$subsystem", 00:23:09.227 "trtype": "$TEST_TRANSPORT", 00:23:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.227 "adrfam": "ipv4", 00:23:09.227 "trsvcid": "$NVMF_PORT", 00:23:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.227 "hdgst": ${hdgst:-false}, 00:23:09.227 "ddgst": ${ddgst:-false} 00:23:09.227 }, 00:23:09.227 "method": "bdev_nvme_attach_controller" 00:23:09.227 } 00:23:09.227 EOF 00:23:09.227 )") 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.227 { 00:23:09.227 "params": { 00:23:09.227 "name": "Nvme$subsystem", 00:23:09.227 "trtype": "$TEST_TRANSPORT", 00:23:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.227 "adrfam": "ipv4", 00:23:09.227 "trsvcid": "$NVMF_PORT", 00:23:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.227 "hdgst": ${hdgst:-false}, 00:23:09.227 "ddgst": ${ddgst:-false} 00:23:09.227 }, 00:23:09.227 "method": "bdev_nvme_attach_controller" 00:23:09.227 } 00:23:09.227 EOF 00:23:09.227 )") 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.227 { 00:23:09.227 "params": { 00:23:09.227 "name": "Nvme$subsystem", 00:23:09.227 "trtype": "$TEST_TRANSPORT", 00:23:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.227 "adrfam": "ipv4", 00:23:09.227 "trsvcid": "$NVMF_PORT", 00:23:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.227 "hdgst": ${hdgst:-false}, 00:23:09.227 "ddgst": ${ddgst:-false} 00:23:09.227 }, 00:23:09.227 "method": "bdev_nvme_attach_controller" 00:23:09.227 } 00:23:09.227 EOF 00:23:09.227 )") 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.227 { 00:23:09.227 "params": { 00:23:09.227 "name": "Nvme$subsystem", 00:23:09.227 "trtype": "$TEST_TRANSPORT", 00:23:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.227 "adrfam": "ipv4", 00:23:09.227 "trsvcid": "$NVMF_PORT", 00:23:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.227 "hdgst": ${hdgst:-false}, 00:23:09.227 "ddgst": ${ddgst:-false} 00:23:09.227 }, 00:23:09.227 "method": "bdev_nvme_attach_controller" 00:23:09.227 } 00:23:09.227 EOF 00:23:09.227 )") 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.227 { 00:23:09.227 "params": { 00:23:09.227 "name": "Nvme$subsystem", 00:23:09.227 "trtype": "$TEST_TRANSPORT", 00:23:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.227 "adrfam": "ipv4", 00:23:09.227 "trsvcid": "$NVMF_PORT", 00:23:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.227 "hdgst": ${hdgst:-false}, 00:23:09.227 "ddgst": ${ddgst:-false} 00:23:09.227 }, 00:23:09.227 "method": "bdev_nvme_attach_controller" 00:23:09.227 } 00:23:09.227 EOF 00:23:09.227 )") 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.227 { 00:23:09.227 "params": { 00:23:09.227 "name": "Nvme$subsystem", 00:23:09.227 "trtype": "$TEST_TRANSPORT", 00:23:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.227 "adrfam": "ipv4", 00:23:09.227 "trsvcid": "$NVMF_PORT", 00:23:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.227 "hdgst": ${hdgst:-false}, 00:23:09.227 "ddgst": ${ddgst:-false} 00:23:09.227 }, 00:23:09.227 "method": "bdev_nvme_attach_controller" 00:23:09.227 } 00:23:09.227 EOF 00:23:09.227 )") 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.227 [2024-07-12 11:01:26.196742] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:09.227 [2024-07-12 11:01:26.196798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172261 ] 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.227 { 00:23:09.227 "params": { 00:23:09.227 "name": "Nvme$subsystem", 00:23:09.227 "trtype": "$TEST_TRANSPORT", 00:23:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.227 "adrfam": "ipv4", 00:23:09.227 "trsvcid": "$NVMF_PORT", 00:23:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.227 "hdgst": ${hdgst:-false}, 00:23:09.227 "ddgst": ${ddgst:-false} 00:23:09.227 }, 00:23:09.227 "method": "bdev_nvme_attach_controller" 00:23:09.227 } 00:23:09.227 EOF 00:23:09.227 )") 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.227 { 00:23:09.227 "params": { 00:23:09.227 "name": "Nvme$subsystem", 00:23:09.227 "trtype": "$TEST_TRANSPORT", 00:23:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.227 "adrfam": "ipv4", 00:23:09.227 "trsvcid": "$NVMF_PORT", 00:23:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.227 "hdgst": ${hdgst:-false}, 00:23:09.227 "ddgst": ${ddgst:-false} 00:23:09.227 }, 00:23:09.227 "method": "bdev_nvme_attach_controller" 00:23:09.227 } 00:23:09.227 EOF 00:23:09.227 )") 00:23:09.227 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.487 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.487 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.487 { 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme$subsystem", 00:23:09.487 "trtype": "$TEST_TRANSPORT", 00:23:09.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "$NVMF_PORT", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.487 "hdgst": ${hdgst:-false}, 00:23:09.487 "ddgst": ${ddgst:-false} 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 } 00:23:09.487 EOF 00:23:09.487 )") 00:23:09.487 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.487 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.487 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.487 { 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme$subsystem", 00:23:09.487 "trtype": "$TEST_TRANSPORT", 00:23:09.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "$NVMF_PORT", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.487 
"hdgst": ${hdgst:-false}, 00:23:09.487 "ddgst": ${ddgst:-false} 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 } 00:23:09.487 EOF 00:23:09.487 )") 00:23:09.487 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.487 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.487 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:09.487 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:09.487 11:01:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme1", 00:23:09.487 "trtype": "tcp", 00:23:09.487 "traddr": "10.0.0.2", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "4420", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.487 "hdgst": false, 00:23:09.487 "ddgst": false 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 },{ 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme2", 00:23:09.487 "trtype": "tcp", 00:23:09.487 "traddr": "10.0.0.2", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "4420", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.487 "hdgst": false, 00:23:09.487 "ddgst": false 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 },{ 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme3", 00:23:09.487 "trtype": "tcp", 00:23:09.487 "traddr": "10.0.0.2", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "4420", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:09.487 "hdgst": false, 00:23:09.487 "ddgst": false 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 },{ 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme4", 00:23:09.487 "trtype": "tcp", 00:23:09.487 "traddr": "10.0.0.2", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "4420", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:09.487 "hdgst": false, 00:23:09.487 "ddgst": false 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 },{ 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme5", 00:23:09.487 "trtype": "tcp", 00:23:09.487 "traddr": "10.0.0.2", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "4420", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:09.487 "hdgst": false, 00:23:09.487 "ddgst": false 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 },{ 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme6", 00:23:09.487 "trtype": "tcp", 00:23:09.487 "traddr": "10.0.0.2", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "4420", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:09.487 "hdgst": false, 00:23:09.487 "ddgst": false 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 },{ 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme7", 00:23:09.487 "trtype": "tcp", 00:23:09.487 "traddr": "10.0.0.2", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "4420", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:09.487 "hdgst": false, 
00:23:09.487 "ddgst": false 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 },{ 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme8", 00:23:09.487 "trtype": "tcp", 00:23:09.487 "traddr": "10.0.0.2", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "4420", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:09.487 "hdgst": false, 00:23:09.487 "ddgst": false 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 },{ 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme9", 00:23:09.487 "trtype": "tcp", 00:23:09.487 "traddr": "10.0.0.2", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "4420", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:09.487 "hdgst": false, 00:23:09.487 "ddgst": false 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 },{ 00:23:09.487 "params": { 00:23:09.487 "name": "Nvme10", 00:23:09.487 "trtype": "tcp", 00:23:09.487 "traddr": "10.0.0.2", 00:23:09.487 "adrfam": "ipv4", 00:23:09.487 "trsvcid": "4420", 00:23:09.487 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:09.487 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:09.487 "hdgst": false, 00:23:09.487 "ddgst": false 00:23:09.487 }, 00:23:09.487 "method": "bdev_nvme_attach_controller" 00:23:09.487 }' 00:23:09.487 [2024-07-12 11:01:26.275628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.487 [2024-07-12 11:01:26.340217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.870 Running I/O for 10 seconds... 00:23:11.130 11:01:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.130 11:01:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:11.130 11:01:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:11.130 11:01:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.130 11:01:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.130 11:01:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:11.130 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:11.390 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:11.390 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:11.390 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.390 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.390 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.390 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.650 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.650 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:11.650 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:11.650 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2172040 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2172040 ']' 00:23:11.925 11:01:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2172040 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2172040 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2172040' 00:23:11.925 killing process with pid 2172040 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2172040 00:23:11.925 11:01:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2172040 00:23:11.925 [2024-07-12 11:01:28.759132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03cf0 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.759176 .. 11:01:28.763384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: (the same "recv state of tqpair=... is same with the state(5) to be set" message repeats back-to-back during target shutdown, first for tqpair=0x1a03cf0, then for tqpair=0x1a02b80 and tqpair=0x1a03020; the duplicate lines are omitted here)
with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.763389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03020 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.763394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03020 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.763398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03020 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.763403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03020 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.763407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03020 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.763411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03020 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.763416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03020 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.764153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a034e0 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.764174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a034e0 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.764724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.764743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.764749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.925 [2024-07-12 11:01:28.764754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764807] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764892] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the 
state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.764999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.765003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.765008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.765013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.765017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.765022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.765026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.765030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.765035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.765039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39440 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 
11:01:28.766534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same 
with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766731] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.766755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c39da0 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the 
state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.926 [2024-07-12 11:01:28.767604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 
11:01:28.767645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.767658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3a260 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same 
with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768305] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.768394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03850 is same with the state(5) to be set 00:23:11.927 [2024-07-12 11:01:28.777842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.927 [2024-07-12 11:01:28.777881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
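For context on the runs collapsed above: this *ERROR* line comes from the nvmf_tcp_qpair_set_recv_state() function in tcp.c that the log cites, and it fires whenever a qpair is asked to enter the PDU-receive state it is already in; while a qpair sits in its error/teardown state, each poll retriggers the message, producing a run per tqpair pointer. A minimal sketch of that guard follows, simplified from memory of the SPDK source rather than quoted from it; in particular, the reading of state 5 as the error recv state of this build is inferred from the log, not confirmed by it.

    /* Sketch of the guard behind "tcp.c:1607: ... is same with the state(5) to be set".
     * Simplified; assumes SPDK's internal nvmf/tcp definitions are in scope. */
    static void
    nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                                  enum nvme_tcp_pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* Re-entering the current state is a no-op: log and bail.
                     * A qpair parked in the error state during disconnect hits
                     * this on every poll, hence the repeated lines above. */
                    SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                                tqpair, state);
                    return;
            }
            tqpair->recv_state = state; /* normal transition path */
    }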
00:23:11.927 [2024-07-12 11:01:28.777842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.927 [2024-07-12 11:01:28.777881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same command/completion pair repeated for WRITE cid:19 through cid:63 (lba:27008 through lba:32640, len:128), each completed with ABORTED - SQ DELETION (00/08), 11:01:28.777899 through 11:01:28.778662 ...]
00:23:11.928 [2024-07-12 11:01:28.778672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.928 [2024-07-12 11:01:28.778679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for READ cid:1 through cid:14 (lba:24704 through lba:26368, len:128), each completed with ABORTED - SQ DELETION (00/08), 11:01:28.778688 through 11:01:28.778911 ...]
00:23:11.928 [2024-07-12 11:01:28.778920]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.778927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.778937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.778944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.778953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.778960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.778990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.928 [2024-07-12 11:01:28.779036] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb60e70 was disconnected and freed. reset controller. 00:23:11.928 [2024-07-12 11:01:28.779294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc90990 is same with the state(5) to be set 00:23:11.928 [2024-07-12 11:01:28.779401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9030 is same with the state(5) to be set 00:23:11.928 [2024-07-12 11:01:28.779488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadf1b0 is same with the state(5) to be set 00:23:11.928 [2024-07-12 11:01:28.779575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779624] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60e340 is same with the state(5) to be set 00:23:11.928 [2024-07-12 11:01:28.779664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7e290 is same with the state(5) to be set 00:23:11.928 [2024-07-12 11:01:28.779752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d0c0 is 
same with the state(5) to be set 00:23:11.928 [2024-07-12 11:01:28.779840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf8ca0 is same with the state(5) to be set 00:23:11.928 [2024-07-12 11:01:28.779926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.779982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.779989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88210 is same with the state(5) to be set 00:23:11.928 [2024-07-12 11:01:28.780011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.780020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780028] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.780036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.780051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.780066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc59e90 is same with the state(5) to be set 00:23:11.928 [2024-07-12 11:01:28.780095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.780104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.780119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.780141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.928 [2024-07-12 11:01:28.780156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabc5d0 is same with the state(5) to be set 00:23:11.928 [2024-07-12 11:01:28.780197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.780206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.780225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.780242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.780259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.780276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.780293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.780310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.780329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.780346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.928 [2024-07-12 11:01:28.780356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.928 [2024-07-12 11:01:28.780363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:11.929 [2024-07-12 11:01:28.780751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 
[2024-07-12 11:01:28.780917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.780985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.780994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 
11:01:28.781085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 
11:01:28.781257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781324] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb604b0 was disconnected and freed. reset controller. 00:23:11.929 [2024-07-12 11:01:28.781355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.781464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.781471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.787974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.929 [2024-07-12 11:01:28.788187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.929 [2024-07-12 11:01:28.788202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.929 [2024-07-12 11:01:28.788209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[44 identical READ command / ABORTED - SQ DELETION completion pairs elided: cid:14 through cid:57, lba:26368 through lba:31872, len:128]
00:23:11.930 [2024-07-12 11:01:28.788945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.930 [2024-07-12 11:01:28.788952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.930 [2024-07-12 11:01:28.789020] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbf74e0 was disconnected and freed. reset controller.
00:23:11.930 [2024-07-12 11:01:28.789186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.930 [2024-07-12 11:01:28.789199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[55 identical WRITE command / ABORTED - SQ DELETION completion pairs elided: cid:8 through cid:62, lba:25600 through lba:32512, len:128]
00:23:11.931 [2024-07-12 11:01:28.790116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.931 [2024-07-12 11:01:28.790129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.931 [2024-07-12 11:01:28.790138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.931 [2024-07-12 11:01:28.790145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[5 identical READ command / ABORTED - SQ DELETION completion pairs elided: cid:1 through cid:5, lba:24704 through lba:25216, len:128]
00:23:11.931 [2024-07-12 11:01:28.790236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.931 [2024-07-12 11:01:28.790243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.931 [2024-07-12 11:01:28.790291] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xab6530 was disconnected and freed. reset controller.
00:23:11.931 [2024-07-12 11:01:28.791645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.931 [2024-07-12 11:01:28.791665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[62 identical READ command / ABORTED - SQ DELETION completion pairs elided: cid:1 through cid:62, lba:16512 through lba:24320, len:128]
00:23:11.932 [2024-07-12 11:01:28.792737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.932 [2024-07-12 11:01:28.792743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.932 [2024-07-12 11:01:28.792805] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb64af0 was disconnected and freed. reset controller.
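
The three dumps above are SPDK draining its queue pairs during a controller reset: every READ and WRITE still outstanding on sqid:1 completes back to the bdev layer with the generic status ABORTED - SQ DELETION (sct 00, sc 08) before the qpair is freed. A minimal sketch of how a completion callback can recognize that status and treat it as retryable (the io_ctx type and requeue_io() helper are hypothetical, not part of this test; the spdk_nvme_cpl fields and status codes come from spdk/nvme_spec.h):

    #include <stdbool.h>
    #include "spdk/nvme.h"

    struct io_ctx;                          /* hypothetical per-I/O context */
    void requeue_io(struct io_ctx *io);     /* hypothetical resubmit helper */

    /* Completion callback: commands aborted by SQ deletion never reached
     * the namespace, so it is safe to resubmit them once the controller
     * has been reconnected. */
    static void
    io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        struct io_ctx *io = arg;

        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            requeue_io(io);    /* hypothetical */
            return;
        }
        /* ... handle success and other error statuses ... */
    }
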
00:23:11.932 [2024-07-12 11:01:28.792909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc90990 (9): Bad file descriptor 00:23:11.932 [2024-07-12 11:01:28.792931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf9030 (9): Bad file descriptor 00:23:11.932 [2024-07-12 11:01:28.792949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadf1b0 (9): Bad file descriptor 00:23:11.932 [2024-07-12 11:01:28.792962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60e340 (9): Bad file descriptor 00:23:11.932 [2024-07-12 11:01:28.792974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7e290 (9): Bad file descriptor 00:23:11.932 [2024-07-12 11:01:28.792986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7d0c0 (9): Bad file descriptor 00:23:11.932 [2024-07-12 11:01:28.792998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf8ca0 (9): Bad file descriptor 00:23:11.932 [2024-07-12 11:01:28.793010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc88210 (9): Bad file descriptor 00:23:11.932 [2024-07-12 11:01:28.793021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc59e90 (9): Bad file descriptor 00:23:11.932 [2024-07-12 11:01:28.793033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabc5d0 (9): Bad file descriptor 00:23:11.932 [2024-07-12 11:01:28.797973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:11.932 [2024-07-12 11:01:28.798009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:11.932 [2024-07-12 11:01:28.798574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:11.932 [2024-07-12 11:01:28.798598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:11.932 [2024-07-12 11:01:28.798869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.932 [2024-07-12 11:01:28.798884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc59e90 with addr=10.0.0.2, port=4420 00:23:11.932 [2024-07-12 11:01:28.798893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc59e90 is same with the state(5) to be set 00:23:11.932 [2024-07-12 11:01:28.798977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.932 [2024-07-12 11:01:28.798986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabc5d0 with addr=10.0.0.2, port=4420 00:23:11.932 [2024-07-12 11:01:28.798993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabc5d0 is same with the state(5) to be set 00:23:11.932 [2024-07-12 11:01:28.799567] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.932 [2024-07-12 11:01:28.799876] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.932 [2024-07-12 11:01:28.799927] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.932 [2024-07-12 11:01:28.799963] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.932 
00:23:11.932 [2024-07-12 11:01:28.800272] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:11.932 [2024-07-12 11:01:28.800286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:11.932 [2024-07-12 11:01:28.800714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.932 [2024-07-12 11:01:28.800726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc88210 with addr=10.0.0.2, port=4420
00:23:11.932 [2024-07-12 11:01:28.800734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88210 is same with the state(5) to be set
00:23:11.932 [2024-07-12 11:01:28.801001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.932 [2024-07-12 11:01:28.801010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf8ca0 with addr=10.0.0.2, port=4420
00:23:11.932 [2024-07-12 11:01:28.801022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf8ca0 is same with the state(5) to be set
00:23:11.932 [2024-07-12 11:01:28.801032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc59e90 (9): Bad file descriptor
00:23:11.932 [2024-07-12 11:01:28.801042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabc5d0 (9): Bad file descriptor
00:23:11.932 [2024-07-12 11:01:28.801453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.932 [2024-07-12 11:01:28.801467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadf1b0 with addr=10.0.0.2, port=4420
00:23:11.932 [2024-07-12 11:01:28.801474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadf1b0 is same with the state(5) to be set
00:23:11.932 [2024-07-12 11:01:28.801482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc88210 (9): Bad file descriptor
00:23:11.932 [2024-07-12 11:01:28.801492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf8ca0 (9): Bad file descriptor
00:23:11.932 [2024-07-12 11:01:28.801500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:23:11.932 [2024-07-12 11:01:28.801506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:23:11.932 [2024-07-12 11:01:28.801515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:23:11.932 [2024-07-12 11:01:28.801528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.932 [2024-07-12 11:01:28.801534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.932 [2024-07-12 11:01:28.801540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.932 [2024-07-12 11:01:28.801597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.932 [2024-07-12 11:01:28.801605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.932 [2024-07-12 11:01:28.801612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadf1b0 (9): Bad file descriptor
00:23:11.932 [2024-07-12 11:01:28.801620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:11.932 [2024-07-12 11:01:28.801626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:11.932 [2024-07-12 11:01:28.801633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:11.932 [2024-07-12 11:01:28.801643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:23:11.932 [2024-07-12 11:01:28.801649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:23:11.932 [2024-07-12 11:01:28.801655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:23:11.932 [2024-07-12 11:01:28.801689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.932 [2024-07-12 11:01:28.801696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.932 [2024-07-12 11:01:28.801702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:23:11.932 [2024-07-12 11:01:28.801708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:23:11.932 [2024-07-12 11:01:28.801714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:23:11.932 [2024-07-12 11:01:28.801750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.932 [2024-07-12 11:01:28.803007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.932 [2024-07-12 11:01:28.803023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1-62 nsid:1 lba:24704-32512 len:128 ...]
00:23:11.933 [2024-07-12 11:01:28.804071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.933 [2024-07-12 11:01:28.804079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.933 [2024-07-12 11:01:28.804087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf8970 is same with the state(5) to be set
00:23:11.933 [2024-07-12 11:01:28.805376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.933 [2024-07-12 11:01:28.805390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1-62 nsid:1 lba:16512-24320 len:128 ...]
00:23:11.934 [2024-07-12 11:01:28.806448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.934 [2024-07-12 11:01:28.806455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.934 [2024-07-12 11:01:28.806463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab79c0 is same with the state(5) to be set
00:23:11.934 [2024-07-12 11:01:28.807728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.934 [2024-07-12 11:01:28.807740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1-40 nsid:1 lba:16512-21504 len:128 ...]
00:23:11.934 [2024-07-12 11:01:28.808423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.934 [2024-07-12 11:01:28.808430] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.934 [2024-07-12 11:01:28.808794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.934 [2024-07-12 11:01:28.808802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62300 is same with the state(5) to be set 00:23:11.935 [2024-07-12 11:01:28.810069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:11.935 [2024-07-12 11:01:28.810882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.810990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.810997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.811006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.811013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.811022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.811030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.811039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 
11:01:28.811048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.811057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.811064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.811074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.811081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.811090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.811097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.811106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.811113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.811129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.811136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.811145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.811152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.811160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63770 is same with the state(5) to be set 00:23:11.935 [2024-07-12 11:01:28.814109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.935 [2024-07-12 11:01:28.814396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.935 [2024-07-12 11:01:28.814403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.814988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.814996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815178] nvme_qpair.c: 
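The "(00/08)" printed with every aborted completion above is NVMe's (status code type / status code) pair: SCT 0x00 is the generic command status set, and SC 0x08 within it is "command aborted due to SQ deletion", the status every command still outstanding on a submission queue receives when that queue is deleted, as happens here while the TCP qpairs are torn down for a reset. A minimal sketch of checking for that status in an SPDK I/O completion callback follows; the struct and enum names come from SPDK's public headers, while the callback registration (spdk_nvme_ns_cmd_read() etc.) is assumed and not shown.

#include <stdio.h>

#include "spdk/nvme.h"  /* struct spdk_nvme_cpl, SPDK_NVME_SCT_*, SPDK_NVME_SC_* */

/* I/O completion callback: decode the pair the log prints as "(00/08)",
 * i.e. (sct/sc). */
static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return;
	}
	printf("I/O failed: sct=0x%02x sc=0x%02x\n", cpl->status.sct, cpl->status.sc);
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The submission queue was deleted (here: qpair torn down for a
		 * controller reset) while the command was outstanding; the command
		 * did not execute and may be resubmitted after reconnect. */
	}
}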
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.936 [2024-07-12 11:01:28.815185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.936 [2024-07-12 11:01:28.815193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb65f60 is same with the state(5) to be set 00:23:11.936 [2024-07-12 11:01:28.816806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:11.936 [2024-07-12 11:01:28.816828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:11.936 [2024-07-12 11:01:28.816837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:11.936 [2024-07-12 11:01:28.816848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:11.936 [2024-07-12 11:01:28.816928] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:11.936 task offset: 26880 on job bdev=Nvme6n1 fails 00:23:11.936 00:23:11.936 Latency(us) 00:23:11.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.936 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.936 Job: Nvme1n1 ended in about 0.95 seconds with error 00:23:11.936 Verification LBA range: start 0x0 length 0x400 00:23:11.936 Nvme1n1 : 0.95 202.68 12.67 67.56 0.00 234106.88 16493.23 258648.75 00:23:11.936 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.936 Job: Nvme2n1 ended in about 0.95 seconds with error 00:23:11.936 Verification LBA range: start 0x0 length 0x400 00:23:11.936 Nvme2n1 : 0.95 202.42 12.65 67.47 0.00 229537.71 17257.81 225443.84 00:23:11.936 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.936 Job: Nvme3n1 ended in about 0.96 seconds with error 00:23:11.937 Verification LBA range: start 0x0 length 0x400 00:23:11.937 Nvme3n1 : 0.96 200.34 12.52 66.78 0.00 227189.33 19551.57 239424.85 00:23:11.937 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.937 Job: Nvme4n1 ended in about 0.95 seconds with error 00:23:11.937 Verification LBA range: start 0x0 length 0x400 00:23:11.937 Nvme4n1 : 0.95 202.17 12.64 67.39 0.00 220210.56 18131.63 242920.11 00:23:11.937 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.937 Job: Nvme5n1 ended in about 0.96 seconds with error 00:23:11.937 Verification LBA range: start 0x0 length 0x400 00:23:11.937 Nvme5n1 : 0.96 133.23 8.33 66.61 0.00 290908.73 37355.52 239424.85 00:23:11.937 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.937 Job: Nvme6n1 ended in about 0.94 seconds with error 00:23:11.937 Verification LBA range: start 0x0 length 0x400 00:23:11.937 Nvme6n1 : 0.94 203.25 12.70 67.75 0.00 209266.45 12834.13 267386.88 00:23:11.937 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.937 Job: Nvme7n1 ended in about 0.96 seconds with error 00:23:11.937 Verification LBA range: start 0x0 length 0x400 00:23:11.937 Nvme7n1 : 0.96 132.91 8.31 66.45 0.00 278828.66 22828.37 253405.87 00:23:11.937 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.937 Job: Nvme8n1 ended in about 0.97 
seconds with error 00:23:11.937 Verification LBA range: start 0x0 length 0x400 00:23:11.937 Nvme8n1 : 0.97 198.87 12.43 66.29 0.00 204732.37 11741.87 256901.12 00:23:11.937 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.937 Job: Nvme9n1 ended in about 0.95 seconds with error 00:23:11.937 Verification LBA range: start 0x0 length 0x400 00:23:11.937 Nvme9n1 : 0.95 134.59 8.41 67.30 0.00 261897.96 17803.95 281367.89 00:23:11.937 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.937 Job: Nvme10n1 ended in about 0.97 seconds with error 00:23:11.937 Verification LBA range: start 0x0 length 0x400 00:23:11.937 Nvme10n1 : 0.97 142.35 8.90 66.02 0.00 248729.67 15400.96 251658.24 00:23:11.937 =================================================================================================================== 00:23:11.937 Total : 1752.81 109.55 669.63 0.00 237307.03 11741.87 281367.89 00:23:11.937 [2024-07-12 11:01:28.843857] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:11.937 [2024-07-12 11:01:28.843905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:11.937 [2024-07-12 11:01:28.844424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.937 [2024-07-12 11:01:28.844444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90990 with addr=10.0.0.2, port=4420 00:23:11.937 [2024-07-12 11:01:28.844454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc90990 is same with the state(5) to be set 00:23:11.937 [2024-07-12 11:01:28.844718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.937 [2024-07-12 11:01:28.844727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf9030 with addr=10.0.0.2, port=4420 00:23:11.937 [2024-07-12 11:01:28.844735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9030 is same with the state(5) to be set 00:23:11.937 [2024-07-12 11:01:28.845003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.937 [2024-07-12 11:01:28.845012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60e340 with addr=10.0.0.2, port=4420 00:23:11.937 [2024-07-12 11:01:28.845026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60e340 is same with the state(5) to be set 00:23:11.937 [2024-07-12 11:01:28.845246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.937 [2024-07-12 11:01:28.845256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc7e290 with addr=10.0.0.2, port=4420 00:23:11.937 [2024-07-12 11:01:28.845263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7e290 is same with the state(5) to be set 00:23:11.937 [2024-07-12 11:01:28.846655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:11.937 [2024-07-12 11:01:28.846669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:11.937 [2024-07-12 11:01:28.846678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:11.937 [2024-07-12 11:01:28.846687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] 
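(For reading the table above: runtime(s), IOPS, MiB/s, Fail/s, TO/s, then average/min/max latency in microseconds. The Total row can be cross-checked from the per-job rows; the IOPS column sums to 1752.81 and Fail/s to 669.63 within rounding. A rough awk sketch over a saved copy of the table, with the file name bdevperf-table.txt purely hypothetical and column positions taken from the layout above:

awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" { iops += $5; fails += $7 }  # $5 = IOPS, $7 = Fail/s
     END { printf "IOPS total: %.2f  Fail/s total: %.2f\n", iops, fails }' bdevperf-table.txt
)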
00:23:11.937 [2024-07-12 11:01:28.843857] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:11.937 [2024-07-12 11:01:28.843905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:11.937 [2024-07-12 11:01:28.844424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.937 [2024-07-12 11:01:28.844444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90990 with addr=10.0.0.2, port=4420
00:23:11.937 [2024-07-12 11:01:28.844454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc90990 is same with the state(5) to be set
00:23:11.937 [2024-07-12 11:01:28.844718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.937 [2024-07-12 11:01:28.844727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf9030 with addr=10.0.0.2, port=4420
00:23:11.937 [2024-07-12 11:01:28.844735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9030 is same with the state(5) to be set
00:23:11.937 [2024-07-12 11:01:28.845003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.937 [2024-07-12 11:01:28.845012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60e340 with addr=10.0.0.2, port=4420
00:23:11.937 [2024-07-12 11:01:28.845026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60e340 is same with the state(5) to be set
00:23:11.937 [2024-07-12 11:01:28.845246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.937 [2024-07-12 11:01:28.845256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc7e290 with addr=10.0.0.2, port=4420
00:23:11.937 [2024-07-12 11:01:28.845263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7e290 is same with the state(5) to be set
00:23:11.937 [2024-07-12 11:01:28.846655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.937 [2024-07-12 11:01:28.846669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:11.937 [2024-07-12 11:01:28.846678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:11.937 [2024-07-12 11:01:28.846687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:11.937 [2024-07-12 11:01:28.846698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:11.937 [2024-07-12 11:01:28.846970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.937 [2024-07-12 11:01:28.846982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc7d0c0 with addr=10.0.0.2, port=4420
00:23:11.937 [2024-07-12 11:01:28.846990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d0c0 is same with the state(5) to be set
00:23:11.937 [2024-07-12 11:01:28.847002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc90990 (9): Bad file descriptor
00:23:11.937 [2024-07-12 11:01:28.847013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf9030 (9): Bad file descriptor
00:23:11.937 [2024-07-12 11:01:28.847023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60e340 (9): Bad file descriptor
00:23:11.937 [2024-07-12 11:01:28.847032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7e290 (9): Bad file descriptor
00:23:11.937 [2024-07-12 11:01:28.847066] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:11.937 [2024-07-12 11:01:28.847078] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:11.937 [2024-07-12 11:01:28.847090] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:11.937 [2024-07-12 11:01:28.847101] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:11.937 [2024-07-12 11:01:28.847478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.937 [2024-07-12 11:01:28.847491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabc5d0 with addr=10.0.0.2, port=4420
00:23:11.937 [2024-07-12 11:01:28.847498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabc5d0 is same with the state(5) to be set
00:23:11.937 [2024-07-12 11:01:28.847902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.937 [2024-07-12 11:01:28.847912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc59e90 with addr=10.0.0.2, port=4420
00:23:11.937 [2024-07-12 11:01:28.847919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc59e90 is same with the state(5) to be set
00:23:11.937 [2024-07-12 11:01:28.848357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.937 [2024-07-12 11:01:28.848368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf8ca0 with addr=10.0.0.2, port=4420
00:23:11.937 [2024-07-12 11:01:28.848375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf8ca0 is same with the state(5) to be set
00:23:11.937 [2024-07-12 11:01:28.848706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.937 [2024-07-12 11:01:28.848715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc88210 with addr=10.0.0.2, port=4420
00:23:11.937 [2024-07-12 11:01:28.848722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88210 is same with the state(5) to be set
00:23:11.937 [2024-07-12 11:01:28.848994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.937 [2024-07-12 11:01:28.849005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadf1b0 with addr=10.0.0.2, port=4420
00:23:11.937 [2024-07-12 11:01:28.849012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadf1b0 is same with the state(5) to be set
00:23:11.937 [2024-07-12 11:01:28.849021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7d0c0 (9): Bad file descriptor
00:23:11.937 [2024-07-12 11:01:28.849030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:23:11.937 [2024-07-12 11:01:28.849036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:23:11.937 [2024-07-12 11:01:28.849045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:23:11.937 [2024-07-12 11:01:28.849057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:23:11.937 [2024-07-12 11:01:28.849063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:23:11.937 [2024-07-12 11:01:28.849070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:23:11.937 [2024-07-12 11:01:28.849081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:23:11.937 [2024-07-12 11:01:28.849087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:23:11.937 [2024-07-12 11:01:28.849093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:23:11.937 [2024-07-12 11:01:28.849103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:23:11.937 [2024-07-12 11:01:28.849109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:23:11.937 [2024-07-12 11:01:28.849116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:23:11.937 [2024-07-12 11:01:28.849190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.937 [2024-07-12 11:01:28.849199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.937 [2024-07-12 11:01:28.849205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.937 [2024-07-12 11:01:28.849211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.937 [2024-07-12 11:01:28.849219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabc5d0 (9): Bad file descriptor
00:23:11.937 [2024-07-12 11:01:28.849228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc59e90 (9): Bad file descriptor
00:23:11.937 [2024-07-12 11:01:28.849237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf8ca0 (9): Bad file descriptor
00:23:11.937 [2024-07-12 11:01:28.849245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc88210 (9): Bad file descriptor
00:23:11.937 [2024-07-12 11:01:28.849255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadf1b0 (9): Bad file descriptor
00:23:11.937 [2024-07-12 11:01:28.849262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:23:11.937 [2024-07-12 11:01:28.849268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:23:11.937 [2024-07-12 11:01:28.849278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:23:11.937 [2024-07-12 11:01:28.849306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.937 [2024-07-12 11:01:28.849313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.937 [2024-07-12 11:01:28.849319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.937 [2024-07-12 11:01:28.849325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.937 [2024-07-12 11:01:28.849335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:23:11.937 [2024-07-12 11:01:28.849342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:23:11.937 [2024-07-12 11:01:28.849349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:23:11.938 [2024-07-12 11:01:28.849358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:23:11.938 [2024-07-12 11:01:28.849364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:23:11.938 [2024-07-12 11:01:28.849371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:23:11.938 [2024-07-12 11:01:28.849380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:11.938 [2024-07-12 11:01:28.849386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:11.938 [2024-07-12 11:01:28.849393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:11.938 [2024-07-12 11:01:28.849402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:23:11.938 [2024-07-12 11:01:28.849408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:23:11.938 [2024-07-12 11:01:28.849414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:23:11.938 [2024-07-12 11:01:28.849443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.938 [2024-07-12 11:01:28.849450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.938 [2024-07-12 11:01:28.849456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.938 [2024-07-12 11:01:28.849462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.938 [2024-07-12 11:01:28.849467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
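(For context: connect() errno 111 is ECONNREFUSED, so the reconnect storm above is expected once the shutdown test has stopped the target listening on 10.0.0.2:4420. A hypothetical one-off probe, not part of this run, to confirm the listener is gone:

if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
  echo "port 4420 still accepting connections"
else
  echo "connection refused or timed out, listener is gone"   # matches errno 111 above
fi
)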
00:23:12.198 11:01:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:23:12.198 11:01:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2172261
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2172261) - No such process
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:13.141 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:13.141 11:01:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:15.687 11:01:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:15.687
00:23:15.687 real 0m7.802s
00:23:15.687 user 0m18.921s
00:23:15.687 sys 0m1.277s
00:23:15.687 11:01:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:15.687 11:01:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:15.687 ************************************
00:23:15.687 END TEST nvmf_shutdown_tc3
00:23:15.687 ************************************
00:23:15.687 11:01:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:23:15.687 11:01:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:23:15.687
00:23:15.687 real 0m32.953s
00:23:15.687 user 1m17.585s
00:23:15.687 sys 0m9.540s
00:23:15.687 11:01:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:15.687 11:01:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:15.687 ************************************
00:23:15.687 END TEST nvmf_shutdown
00:23:15.687 ************************************
00:23:15.687 11:01:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:15.687 11:01:32 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:23:15.687 11:01:32 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:15.687 11:01:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:15.687 11:01:32 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:23:15.687 11:01:32 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:15.687 11:01:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:15.687 11:01:32 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:23:15.687 11:01:32 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:23:15.687 11:01:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:15.687 11:01:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:15.687 11:01:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:15.687 ************************************
00:23:15.687 START TEST nvmf_multicontroller
00:23:15.687 ************************************
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:23:15.688 * Looking for test storage...
00:23:15.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable
00:23:15.688 11:01:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=()
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=()
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=()
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=()
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=()
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=()
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=()
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:23:23.850 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:23:23.850 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:23.850 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:23:23.851 Found net devices under 0000:4b:00.0: cvl_0_0
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:23:23.851 Found net devices under 0000:4b:00.1: cvl_0_1
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:23.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:23.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms
00:23:23.851
00:23:23.851 --- 10.0.0.2 ping statistics ---
00:23:23.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:23.851 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:23.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:23.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms
00:23:23.851
00:23:23.851 --- 10.0.0.1 ping statistics ---
00:23:23.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:23.851 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp
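(The connectivity check above relies on the phy-mode topology: the target-side port cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk with 10.0.0.2/24, while cvl_0_1 keeps 10.0.0.1/24 on the host, and each side pings the other. Condensed into a standalone sketch, using exactly the device and namespace names discovered above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2                                 # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> host
)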
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2177182
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2177182
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2177182 ']'
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:23.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:23.851 11:01:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:23.851 [2024-07-12 11:01:39.910679] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:23:23.851 [2024-07-12 11:01:39.910743] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:23.851 EAL: No free 2048 kB hugepages reported on node 1
00:23:23.851 [2024-07-12 11:01:39.996981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:23:23.851 [2024-07-12 11:01:40.099882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:23.851 [2024-07-12 11:01:40.099941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:23.851 [2024-07-12 11:01:40.099949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:23.851 [2024-07-12 11:01:40.099957] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:23.851 [2024-07-12 11:01:40.099962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:23.851 [2024-07-12 11:01:40.100183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:23.851 [2024-07-12 11:01:40.100292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:23:23.851 [2024-07-12 11:01:40.100436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:23.851 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:23.851 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0
00:23:23.851 11:01:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:23.851 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:23.851 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:23.851 11:01:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:23.851 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:23.851 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:23.851 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.113 [2024-07-12 11:01:40.767090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.113 Malloc0
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.113 [2024-07-12 11:01:40.848973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.113 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.114 [2024-07-12 11:01:40.860865] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.114 Malloc1
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2177497
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
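(The rpc_cmd calls above drive the target over its RPC socket; outside the harness the same setup can be reproduced directly with SPDK's scripts/rpc.py against the namespaced nvmf_tgt. A rough sketch with the exact subsystem names, sizes, and addresses from this run; the rpc.py invocation style is an assumption about running it by hand rather than through rpc_cmd:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
)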
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2177497 /var/tmp/bdevperf.sock
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2177497 ']'
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:24.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:24.114 11:01:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:25.070 NVMe0n1
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:25.070 1
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:25.070 request:
00:23:25.070 {
00:23:25.070 "name": "NVMe0",
00:23:25.070 "trtype": "tcp",
00:23:25.070 "traddr": "10.0.0.2",
00:23:25.070 "adrfam": "ipv4",
00:23:25.070 "trsvcid": "4420",
00:23:25.070 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:25.070 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:23:25.070 "hostaddr": "10.0.0.2",
00:23:25.070 "hostsvcid": "60000",
00:23:25.070 "prchk_reftag": false,
00:23:25.070 "prchk_guard": false,
00:23:25.070 "hdgst": false,
00:23:25.070 "ddgst": false,
00:23:25.070 "method": "bdev_nvme_attach_controller",
00:23:25.070 "req_id": 1
00:23:25.070 }
00:23:25.070 Got JSON-RPC error response
00:23:25.070 response:
00:23:25.070 {
00:23:25.070 "code": -114,
00:23:25.070 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:23:25.070 }
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:23:25.070 request:
00:23:25.070 {
00:23:25.070 "name": "NVMe0",
00:23:25.070 "trtype": "tcp",
00:23:25.070 "traddr": "10.0.0.2",
00:23:25.070 "adrfam": "ipv4",
00:23:25.070 "trsvcid": "4420",
00:23:25.070 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:23:25.070 "hostaddr": "10.0.0.2",
00:23:25.070 "hostsvcid": "60000",
00:23:25.070 "prchk_reftag": false,
00:23:25.070 "prchk_guard": false,
00:23:25.070 "hdgst": false,
00:23:25.070 "ddgst": false,
00:23:25.070 "method": "bdev_nvme_attach_controller",
00:23:25.070 "req_id": 1
00:23:25.070 }
00:23:25.070 Got JSON-RPC error response
00:23:25.070 response:
00:23:25.070 {
00:23:25.070 "code": -114,
00:23:25.070 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:23:25.070 }
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:23:25.070 request:
00:23:25.070 {
00:23:25.070 "name": "NVMe0",
00:23:25.070 "trtype": "tcp",
00:23:25.070 "traddr": "10.0.0.2",
00:23:25.070 "adrfam": "ipv4",
00:23:25.070 "trsvcid": "4420",
00:23:25.070 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:25.070 "hostaddr": "10.0.0.2",
00:23:25.070 "hostsvcid": "60000",
00:23:25.070 "prchk_reftag": false,
00:23:25.070 "prchk_guard": false,
00:23:25.070 "hdgst": false,
00:23:25.070 "ddgst": false,
00:23:25.070 "multipath": "disable",
00:23:25.070 "method": "bdev_nvme_attach_controller",
00:23:25.070 "req_id": 1
00:23:25.070 }
00:23:25.070 Got JSON-RPC error response
00:23:25.070 response:
00:23:25.070 {
00:23:25.070 "code": -114,
00:23:25.070 "message": "A controller named NVMe0 already exists and multipath is disabled\n"
00:23:25.070 }
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:25.070 11:01:41 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.070 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:25.070 11:01:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.070 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:25.071 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.071 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.071 request: 00:23:25.071 { 00:23:25.071 "name": "NVMe0", 00:23:25.071 "trtype": "tcp", 00:23:25.071 "traddr": "10.0.0.2", 00:23:25.071 "adrfam": "ipv4", 00:23:25.071 "trsvcid": "4420", 00:23:25.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.071 "hostaddr": "10.0.0.2", 00:23:25.071 "hostsvcid": "60000", 00:23:25.071 "prchk_reftag": false, 00:23:25.071 "prchk_guard": false, 00:23:25.071 "hdgst": false, 00:23:25.071 "ddgst": false, 00:23:25.071 "multipath": "failover", 00:23:25.071 "method": "bdev_nvme_attach_controller", 00:23:25.071 "req_id": 1 00:23:25.071 } 00:23:25.071 Got JSON-RPC error response 00:23:25.071 response: 00:23:25.071 { 00:23:25.071 "code": -114, 00:23:25.071 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:25.071 } 00:23:25.071 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:25.071 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:25.071 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:25.071 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:25.071 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:25.071 11:01:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.071 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.071 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.338 00:23:25.338 11:01:42 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.338 11:01:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.338 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.338 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.338 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.338 11:01:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:25.338 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.338 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.600 00:23:25.600 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.600 11:01:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.600 11:01:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:25.600 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.600 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.600 11:01:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.600 11:01:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:25.600 11:01:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.556 0 00:23:26.556 11:01:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:26.556 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.556 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.556 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.556 11:01:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2177497 00:23:26.556 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2177497 ']' 00:23:26.556 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2177497 00:23:26.556 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:26.556 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.556 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2177497 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2177497' 00:23:26.817 killing process with pid 2177497 00:23:26.817 11:01:43 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2177497 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2177497 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:26.817 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:26.817 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.817 [2024-07-12 11:01:40.989608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:26.817 [2024-07-12 11:01:40.989694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177497 ] 00:23:26.817 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.817 [2024-07-12 11:01:41.071050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.817 [2024-07-12 11:01:41.166465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.817 [2024-07-12 11:01:42.334408] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name ba94430f-b834-4f90-a814-7cfc9b2c0754 already exists 00:23:26.817 [2024-07-12 11:01:42.334454] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:ba94430f-b834-4f90-a814-7cfc9b2c0754 alias for bdev NVMe1n1 00:23:26.817 [2024-07-12 11:01:42.334463] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:26.817 Running I/O for 1 seconds... 
00:23:26.817 00:23:26.817 Latency(us) 00:23:26.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.817 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:26.817 NVMe0n1 : 1.00 27224.39 106.35 0.00 0.00 4690.03 2826.24 10267.31 00:23:26.817 =================================================================================================================== 00:23:26.817 Total : 27224.39 106.35 0.00 0.00 4690.03 2826.24 10267.31 00:23:26.818 Received shutdown signal, test time was about 1.000000 seconds 00:23:26.818 00:23:26.818 Latency(us) 00:23:26.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.818 =================================================================================================================== 00:23:26.818 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.818 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:26.818 rmmod nvme_tcp 00:23:26.818 rmmod nvme_fabrics 00:23:26.818 rmmod nvme_keyring 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2177182 ']' 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2177182 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2177182 ']' 00:23:26.818 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2177182 00:23:27.078 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:27.078 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.078 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2177182 00:23:27.078 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:27.078 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:27.078 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2177182' 00:23:27.078 killing process with pid 2177182 00:23:27.078 11:01:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2177182 00:23:27.078 11:01:43 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2177182 00:23:27.078 11:01:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:27.078 11:01:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:27.078 11:01:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:27.078 11:01:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.078 11:01:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.078 11:01:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.078 11:01:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.078 11:01:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.637 11:01:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:29.637 00:23:29.637 real 0m13.750s 00:23:29.637 user 0m16.780s 00:23:29.637 sys 0m6.339s 00:23:29.637 11:01:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:29.638 11:01:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.638 ************************************ 00:23:29.638 END TEST nvmf_multicontroller 00:23:29.638 ************************************ 00:23:29.638 11:01:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:29.638 11:01:46 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.638 11:01:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:29.638 11:01:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:29.638 11:01:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:29.638 ************************************ 00:23:29.638 START TEST nvmf_aer 00:23:29.638 ************************************ 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.638 * Looking for test storage... 
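A note on the multicontroller run that just finished: all four rejected bdev_nvme_attach_controller calls fail with JSON-RPC code -114, which is -EALREADY on Linux, because a controller named NVMe0 already exists; reuse is refused with a different hostnqn (-q), against a different subsystem (cnode2), with -x disable, and with -x failover over the same network path, while the attach through the second listener on port 4421 is accepted. The sequence can be replayed by hand against an idle bdevperf instance. A minimal sketch, assuming rpc.py sits at scripts/rpc.py in the SPDK tree (addresses, NQNs and flags are taken from the log above):

  # start bdevperf idle (-z) with its own RPC socket, as multicontroller.sh@43 does
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

  # the first attach creates bdev NVMe0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # reusing the name NVMe0 on the same path is rejected with -114 (-EALREADY)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover

  # a second path through listener 4421 is accepted and can be detached again
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The bdevperf summary captured in try.txt is also self-consistent: at the 4096-byte I/O size, 27224.39 IOPS works out to 27224.39 * 4096 / 2^20 = 106.35 MiB/s, and by Little's law 27224.39 IOPS at a 4690.03 us average latency implies about 128 I/Os in flight, matching -q 128:

  awk 'BEGIN { printf "%.2f MiB/s, QD %.1f\n", 27224.39*4096/1048576, 27224.39*4690.03/1e6 }'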
00:23:29.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:29.638 11:01:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:37.826 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:23:37.826 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:37.826 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:37.826 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.826 
11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:37.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:23:37.826 00:23:37.826 --- 10.0.0.2 ping statistics --- 00:23:37.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.826 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:37.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:23:37.826 00:23:37.826 --- 10.0.0.1 ping statistics --- 00:23:37.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.826 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2182192 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2182192 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2182192 ']' 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.826 11:01:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 [2024-07-12 11:01:53.703601] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:37.826 [2024-07-12 11:01:53.703672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.826 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.826 [2024-07-12 11:01:53.794032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.826 [2024-07-12 11:01:53.892761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.826 [2024-07-12 11:01:53.892822] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:37.826 [2024-07-12 11:01:53.892831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.826 [2024-07-12 11:01:53.892838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.826 [2024-07-12 11:01:53.892845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.826 [2024-07-12 11:01:53.893004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.826 [2024-07-12 11:01:53.893173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.826 [2024-07-12 11:01:53.893273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.826 [2024-07-12 11:01:53.893274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 [2024-07-12 11:01:54.549283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 Malloc0 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 [2024-07-12 11:01:54.615168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.826 [ 00:23:37.826 { 00:23:37.826 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:37.826 "subtype": "Discovery", 00:23:37.826 "listen_addresses": [], 00:23:37.826 "allow_any_host": true, 00:23:37.826 "hosts": [] 00:23:37.826 }, 00:23:37.826 { 00:23:37.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.826 "subtype": "NVMe", 00:23:37.826 "listen_addresses": [ 00:23:37.826 { 00:23:37.826 "trtype": "TCP", 00:23:37.826 "adrfam": "IPv4", 00:23:37.826 "traddr": "10.0.0.2", 00:23:37.826 "trsvcid": "4420" 00:23:37.826 } 00:23:37.826 ], 00:23:37.826 "allow_any_host": true, 00:23:37.826 "hosts": [], 00:23:37.826 "serial_number": "SPDK00000000000001", 00:23:37.826 "model_number": "SPDK bdev Controller", 00:23:37.826 "max_namespaces": 2, 00:23:37.826 "min_cntlid": 1, 00:23:37.826 "max_cntlid": 65519, 00:23:37.826 "namespaces": [ 00:23:37.826 { 00:23:37.826 "nsid": 1, 00:23:37.826 "bdev_name": "Malloc0", 00:23:37.826 "name": "Malloc0", 00:23:37.826 "nguid": "55C6CE232EAB4CC6A7E69D3188E8FCB7", 00:23:37.826 "uuid": "55c6ce23-2eab-4cc6-a7e6-9d3188e8fcb7" 00:23:37.826 } 00:23:37.826 ] 00:23:37.826 } 00:23:37.826 ] 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2182243 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:37.826 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:37.827 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:37.827 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:37.827 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:37.827 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:37.827 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.827 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:37.827 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:37.827 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:37.827 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.087 Malloc1 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.087 [ 00:23:38.087 { 00:23:38.087 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:38.087 "subtype": "Discovery", 00:23:38.087 "listen_addresses": [], 00:23:38.087 "allow_any_host": true, 00:23:38.087 "hosts": [] 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.087 "subtype": "NVMe", 00:23:38.087 "listen_addresses": [ 00:23:38.087 { 00:23:38.087 "trtype": "TCP", 00:23:38.087 "adrfam": "IPv4", 00:23:38.087 "traddr": "10.0.0.2", 00:23:38.087 "trsvcid": "4420" 00:23:38.087 } 00:23:38.087 ], 00:23:38.087 "allow_any_host": true, 00:23:38.087 "hosts": [], 00:23:38.087 "serial_number": "SPDK00000000000001", 00:23:38.087 "model_number": "SPDK bdev Controller", 00:23:38.087 "max_namespaces": 2, 00:23:38.087 "min_cntlid": 1, 00:23:38.087 "max_cntlid": 65519, 00:23:38.087 "namespaces": [ 00:23:38.087 { 00:23:38.087 "nsid": 1, 00:23:38.087 "bdev_name": "Malloc0", 00:23:38.087 "name": "Malloc0", 00:23:38.087 "nguid": "55C6CE232EAB4CC6A7E69D3188E8FCB7", 00:23:38.087 "uuid": "55c6ce23-2eab-4cc6-a7e6-9d3188e8fcb7" 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "nsid": 2, 00:23:38.087 "bdev_name": "Malloc1", 00:23:38.087 "name": "Malloc1", 00:23:38.087 "nguid": "3B5741F413C44B9E9D73807ADA40F1F4", 00:23:38.087 "uuid": "3b5741f4-13c4-4b9e-9d73-807ada40f1f4" 00:23:38.087 } 00:23:38.087 ] 00:23:38.087 } 00:23:38.087 ] 00:23:38.087 Asynchronous Event Request test 00:23:38.087 Attaching to 10.0.0.2 00:23:38.087 Attached to 10.0.0.2 00:23:38.087 Registering asynchronous event callbacks... 00:23:38.087 Starting namespace attribute notice tests for all controllers... 00:23:38.087 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:38.087 aer_cb - Changed Namespace 00:23:38.087 Cleaning up... 
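The AER exchange logged above can be reproduced by hand against the same target: the aer example subscribes for namespace notices, touches the file named by -t once its callbacks are registered (the script's waitforfile loop waits on exactly that before changing anything), and the namespace added afterwards triggers the event. A sketch under the same paths as the log (rpc.py location assumed to be scripts/rpc.py):

  # host side: expect 2 namespaces (-n 2) and signal readiness via the touch file
  ./test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &

  # target side: create a second malloc bdev and expose it as nsid 2
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The callback values printed above line up with the NVMe spec: aen_event_type 0x02 is a Notice, aen_event_info 0x00 within Notice means Namespace Attribute Changed, and log page 4 is the Changed Namespace List the host reads in response.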
00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2182243 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.087 11:01:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.087 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.087 11:01:55 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:38.087 11:01:55 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:38.087 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:38.087 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:38.087 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:38.087 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:38.087 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:38.087 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:38.087 rmmod nvme_tcp 00:23:38.087 rmmod nvme_fabrics 00:23:38.087 rmmod nvme_keyring 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2182192 ']' 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2182192 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2182192 ']' 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2182192 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2182192 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2182192' 00:23:38.348 killing process with pid 2182192 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2182192 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2182192 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # 
'[' '' == iso ']' 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:38.348 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.349 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:38.349 11:01:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.349 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.349 11:01:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.910 11:01:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:40.910 00:23:40.910 real 0m11.206s 00:23:40.910 user 0m7.669s 00:23:40.910 sys 0m6.039s 00:23:40.910 11:01:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:40.910 11:01:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.910 ************************************ 00:23:40.910 END TEST nvmf_aer 00:23:40.910 ************************************ 00:23:40.910 11:01:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:40.910 11:01:57 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:40.910 11:01:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:40.910 11:01:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:40.910 11:01:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:40.910 ************************************ 00:23:40.910 START TEST nvmf_async_init 00:23:40.910 ************************************ 00:23:40.910 11:01:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:40.910 * Looking for test storage... 
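Both suites so far exit through the same nvmftestfini teardown traced above: unload the host-side NVMe modules with retries (nvme_tcp may still be referenced right after disconnect), kill the nvmf_tgt, and flush the test interface. A rough equivalent of what the trace shows; the pid variable is hypothetical here and the netns removal is an assumption about what _remove_spdk_ns does:

  # retry the unload; the rmmod lines above (nvme_tcp, nvme_fabrics, nvme_keyring) are its output
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
  done
  modprobe -v -r nvme-fabrics

  kill "$nvmfpid"                   # hypothetical: pid printed as nvmfpid=... at target start
  ip netns delete cvl_0_0_ns_spdk   # assumption: how _remove_spdk_ns drops the target namespace
  ip -4 addr flush cvl_0_1          # traced verbatim at nvmf/common.sh@279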
00:23:40.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:40.910 11:01:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.910 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:40.910 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.910 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.910 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.910 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.910 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.910 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.911 11:01:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=bab5fcf02b864703acda0fbd728af79b 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:40.912 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:40.912 11:01:57 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.913 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:40.913 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:40.913 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:40.913 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.913 11:01:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.913 11:01:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.913 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:40.913 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:40.913 11:01:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:40.913 11:01:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:49.061 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:49.062 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:49.062 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:49.062 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
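gather_supported_nvmf_pci_devs, traced above, filters the PCI bus against known Intel (e810, x722) and Mellanox device IDs, then maps each surviving address to its kernel interface through sysfs. The mapping is just a glob plus a prefix strip; for the first port found on this node it reduces to:

    pci=0000:4b:00.0
    # every PCI network function exposes its netdev(s) under .../net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # keep only the interface names, e.g. cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

The same loop then runs for 0000:4b:00.1, yielding cvl_0_1, as the trace continues below.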
00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:49.062 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:49.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:23:49.062 00:23:49.062 --- 10.0.0.2 ping statistics --- 00:23:49.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.062 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:49.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:23:49.062 00:23:49.062 --- 10.0.0.1 ping statistics --- 00:23:49.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.062 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2186567 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2186567 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2186567 ']' 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.062 11:02:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.062 [2024-07-12 11:02:05.041648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
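Both E810 ports live in the same host, so nvmf_tcp_init pushes the target port into a private network namespace; otherwise the kernel would short-circuit 10.0.0.1 -> 10.0.0.2 over loopback and no TCP traffic would ever cross the NICs. The traced commands reduce to this topology (names and addresses exactly as above):

    ip netns add cvl_0_0_ns_spdk                      # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The sub-millisecond ping in both directions confirms the path, and because NVMF_APP was prefixed with NVMF_TARGET_NS_CMD, nvmf_tgt itself (pid 2186567) runs inside the namespace, started with the DPDK/EAL parameters shown below.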
00:23:49.062 [2024-07-12 11:02:05.041736] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.062 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.062 [2024-07-12 11:02:05.127402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.062 [2024-07-12 11:02:05.221258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.062 [2024-07-12 11:02:05.221313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.062 [2024-07-12 11:02:05.221321] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.063 [2024-07-12 11:02:05.221328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.063 [2024-07-12 11:02:05.221334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.063 [2024-07-12 11:02:05.221368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.063 [2024-07-12 11:02:05.873998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.063 null0 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.063 11:02:05 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bab5fcf02b864703acda0fbd728af79b 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.063 [2024-07-12 11:02:05.934325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.063 11:02:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.324 nvme0n1 00:23:49.324 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.324 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:49.324 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.324 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.324 [ 00:23:49.324 { 00:23:49.324 "name": "nvme0n1", 00:23:49.324 "aliases": [ 00:23:49.324 "bab5fcf0-2b86-4703-acda-0fbd728af79b" 00:23:49.324 ], 00:23:49.324 "product_name": "NVMe disk", 00:23:49.324 "block_size": 512, 00:23:49.324 "num_blocks": 2097152, 00:23:49.324 "uuid": "bab5fcf0-2b86-4703-acda-0fbd728af79b", 00:23:49.324 "assigned_rate_limits": { 00:23:49.324 "rw_ios_per_sec": 0, 00:23:49.324 "rw_mbytes_per_sec": 0, 00:23:49.324 "r_mbytes_per_sec": 0, 00:23:49.324 "w_mbytes_per_sec": 0 00:23:49.324 }, 00:23:49.324 "claimed": false, 00:23:49.324 "zoned": false, 00:23:49.324 "supported_io_types": { 00:23:49.324 "read": true, 00:23:49.324 "write": true, 00:23:49.324 "unmap": false, 00:23:49.324 "flush": true, 00:23:49.324 "reset": true, 00:23:49.324 "nvme_admin": true, 00:23:49.324 "nvme_io": true, 00:23:49.324 "nvme_io_md": false, 00:23:49.324 "write_zeroes": true, 00:23:49.324 "zcopy": false, 00:23:49.324 "get_zone_info": false, 00:23:49.324 "zone_management": false, 00:23:49.324 "zone_append": false, 00:23:49.324 "compare": true, 00:23:49.324 "compare_and_write": true, 00:23:49.324 "abort": true, 00:23:49.324 "seek_hole": false, 00:23:49.324 "seek_data": false, 00:23:49.324 "copy": true, 00:23:49.324 "nvme_iov_md": false 00:23:49.324 }, 00:23:49.324 "memory_domains": [ 00:23:49.324 { 00:23:49.324 "dma_device_id": "system", 00:23:49.324 "dma_device_type": 1 00:23:49.324 } 00:23:49.324 ], 00:23:49.325 "driver_specific": { 00:23:49.325 "nvme": [ 00:23:49.325 { 00:23:49.325 "trid": { 00:23:49.325 "trtype": "TCP", 00:23:49.325 "adrfam": "IPv4", 00:23:49.325 "traddr": "10.0.0.2", 
00:23:49.325 "trsvcid": "4420", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:49.325 }, 00:23:49.325 "ctrlr_data": { 00:23:49.325 "cntlid": 1, 00:23:49.325 "vendor_id": "0x8086", 00:23:49.325 "model_number": "SPDK bdev Controller", 00:23:49.325 "serial_number": "00000000000000000000", 00:23:49.325 "firmware_revision": "24.09", 00:23:49.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:49.325 "oacs": { 00:23:49.325 "security": 0, 00:23:49.325 "format": 0, 00:23:49.325 "firmware": 0, 00:23:49.325 "ns_manage": 0 00:23:49.325 }, 00:23:49.325 "multi_ctrlr": true, 00:23:49.325 "ana_reporting": false 00:23:49.325 }, 00:23:49.325 "vs": { 00:23:49.325 "nvme_version": "1.3" 00:23:49.325 }, 00:23:49.325 "ns_data": { 00:23:49.325 "id": 1, 00:23:49.325 "can_share": true 00:23:49.325 } 00:23:49.325 } 00:23:49.325 ], 00:23:49.325 "mp_policy": "active_passive" 00:23:49.325 } 00:23:49.325 } 00:23:49.325 ] 00:23:49.325 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.325 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:49.325 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.325 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.325 [2024-07-12 11:02:06.210841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:49.325 [2024-07-12 11:02:06.210926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1842df0 (9): Bad file descriptor 00:23:49.586 [2024-07-12 11:02:06.343270] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:49.586 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.586 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:49.586 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.586 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.586 [ 00:23:49.586 { 00:23:49.586 "name": "nvme0n1", 00:23:49.586 "aliases": [ 00:23:49.586 "bab5fcf0-2b86-4703-acda-0fbd728af79b" 00:23:49.586 ], 00:23:49.586 "product_name": "NVMe disk", 00:23:49.586 "block_size": 512, 00:23:49.586 "num_blocks": 2097152, 00:23:49.586 "uuid": "bab5fcf0-2b86-4703-acda-0fbd728af79b", 00:23:49.586 "assigned_rate_limits": { 00:23:49.586 "rw_ios_per_sec": 0, 00:23:49.586 "rw_mbytes_per_sec": 0, 00:23:49.586 "r_mbytes_per_sec": 0, 00:23:49.586 "w_mbytes_per_sec": 0 00:23:49.586 }, 00:23:49.586 "claimed": false, 00:23:49.586 "zoned": false, 00:23:49.586 "supported_io_types": { 00:23:49.586 "read": true, 00:23:49.586 "write": true, 00:23:49.586 "unmap": false, 00:23:49.586 "flush": true, 00:23:49.586 "reset": true, 00:23:49.586 "nvme_admin": true, 00:23:49.586 "nvme_io": true, 00:23:49.586 "nvme_io_md": false, 00:23:49.586 "write_zeroes": true, 00:23:49.586 "zcopy": false, 00:23:49.586 "get_zone_info": false, 00:23:49.586 "zone_management": false, 00:23:49.586 "zone_append": false, 00:23:49.586 "compare": true, 00:23:49.586 "compare_and_write": true, 00:23:49.586 "abort": true, 00:23:49.586 "seek_hole": false, 00:23:49.586 "seek_data": false, 00:23:49.586 "copy": true, 00:23:49.586 "nvme_iov_md": false 00:23:49.586 }, 00:23:49.586 "memory_domains": [ 00:23:49.586 { 00:23:49.586 "dma_device_id": "system", 00:23:49.586 "dma_device_type": 
1 00:23:49.586 } 00:23:49.586 ], 00:23:49.586 "driver_specific": { 00:23:49.586 "nvme": [ 00:23:49.586 { 00:23:49.586 "trid": { 00:23:49.586 "trtype": "TCP", 00:23:49.586 "adrfam": "IPv4", 00:23:49.586 "traddr": "10.0.0.2", 00:23:49.586 "trsvcid": "4420", 00:23:49.586 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:49.586 }, 00:23:49.586 "ctrlr_data": { 00:23:49.586 "cntlid": 2, 00:23:49.586 "vendor_id": "0x8086", 00:23:49.586 "model_number": "SPDK bdev Controller", 00:23:49.586 "serial_number": "00000000000000000000", 00:23:49.586 "firmware_revision": "24.09", 00:23:49.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:49.586 "oacs": { 00:23:49.586 "security": 0, 00:23:49.586 "format": 0, 00:23:49.586 "firmware": 0, 00:23:49.586 "ns_manage": 0 00:23:49.586 }, 00:23:49.586 "multi_ctrlr": true, 00:23:49.586 "ana_reporting": false 00:23:49.586 }, 00:23:49.586 "vs": { 00:23:49.586 "nvme_version": "1.3" 00:23:49.586 }, 00:23:49.586 "ns_data": { 00:23:49.586 "id": 1, 00:23:49.586 "can_share": true 00:23:49.586 } 00:23:49.586 } 00:23:49.586 ], 00:23:49.586 "mp_policy": "active_passive" 00:23:49.586 } 00:23:49.586 } 00:23:49.586 ] 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.tg1up6rVSn 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.tg1up6rVSn 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.587 [2024-07-12 11:02:06.423520] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:49.587 [2024-07-12 11:02:06.423699] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tg1up6rVSn 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
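Stripped of the xtrace noise, async_init.sh drives the whole test over JSON-RPC: create the TCP transport, back a 1024-block x 512-byte null bdev, export it as a namespace of cnode0 under the pre-generated NGUID, listen on 4420, attach a host-side controller, reset it, then repeat the listener/attach pair on 4421 with TLS. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py from the SPDK tree, assumed here to be on PATH and talking to the target's default RPC socket:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1024 512
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
           -g bab5fcf02b864703acda0fbd728af79b
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
           -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
           -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

    # TLS leg: PSK in NVMe TLS interchange format, mode 0600, then a
    # --secure-channel listener keyed to that PSK
    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
           -t tcp -a 10.0.0.2 -s 4421 --secure-channel

Note how the NGUID passed on the command line (uuidgen output with the hyphens stripped by tr -d -) resurfaces in bdev_get_bdevs as the hyphenated uuid/alias bab5fcf0-2b86-4703-acda-0fbd728af79b. The matching nvmf_subsystem_add_host and the --psk controller attach follow in the trace below, each drawing a v24.09 deprecation warning for path-based PSKs.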
00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.587 [2024-07-12 11:02:06.435546] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tg1up6rVSn 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.587 [2024-07-12 11:02:06.447595] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.587 [2024-07-12 11:02:06.447644] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:49.587 nvme0n1 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.587 [ 00:23:49.587 { 00:23:49.587 "name": "nvme0n1", 00:23:49.587 "aliases": [ 00:23:49.587 "bab5fcf0-2b86-4703-acda-0fbd728af79b" 00:23:49.587 ], 00:23:49.587 "product_name": "NVMe disk", 00:23:49.587 "block_size": 512, 00:23:49.587 "num_blocks": 2097152, 00:23:49.587 "uuid": "bab5fcf0-2b86-4703-acda-0fbd728af79b", 00:23:49.587 "assigned_rate_limits": { 00:23:49.587 "rw_ios_per_sec": 0, 00:23:49.587 "rw_mbytes_per_sec": 0, 00:23:49.587 "r_mbytes_per_sec": 0, 00:23:49.587 "w_mbytes_per_sec": 0 00:23:49.587 }, 00:23:49.587 "claimed": false, 00:23:49.587 "zoned": false, 00:23:49.587 "supported_io_types": { 00:23:49.587 "read": true, 00:23:49.587 "write": true, 00:23:49.587 "unmap": false, 00:23:49.587 "flush": true, 00:23:49.587 "reset": true, 00:23:49.587 "nvme_admin": true, 00:23:49.587 "nvme_io": true, 00:23:49.587 "nvme_io_md": false, 00:23:49.587 "write_zeroes": true, 00:23:49.587 "zcopy": false, 00:23:49.587 "get_zone_info": false, 00:23:49.587 "zone_management": false, 00:23:49.587 "zone_append": false, 00:23:49.587 "compare": true, 00:23:49.587 "compare_and_write": true, 00:23:49.587 "abort": true, 00:23:49.587 "seek_hole": false, 00:23:49.587 "seek_data": false, 00:23:49.587 "copy": true, 00:23:49.587 "nvme_iov_md": false 00:23:49.587 }, 00:23:49.587 "memory_domains": [ 00:23:49.587 { 00:23:49.587 "dma_device_id": "system", 00:23:49.587 "dma_device_type": 1 00:23:49.587 } 00:23:49.587 ], 00:23:49.587 "driver_specific": { 00:23:49.587 "nvme": [ 00:23:49.587 { 00:23:49.587 "trid": { 00:23:49.587 "trtype": "TCP", 00:23:49.587 "adrfam": "IPv4", 00:23:49.587 "traddr": "10.0.0.2", 00:23:49.587 "trsvcid": "4421", 00:23:49.587 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:49.587 }, 00:23:49.587 "ctrlr_data": { 00:23:49.587 "cntlid": 3, 00:23:49.587 "vendor_id": "0x8086", 00:23:49.587 "model_number": "SPDK bdev Controller", 00:23:49.587 "serial_number": "00000000000000000000", 00:23:49.587 "firmware_revision": "24.09", 00:23:49.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:23:49.587 "oacs": { 00:23:49.587 "security": 0, 00:23:49.587 "format": 0, 00:23:49.587 "firmware": 0, 00:23:49.587 "ns_manage": 0 00:23:49.587 }, 00:23:49.587 "multi_ctrlr": true, 00:23:49.587 "ana_reporting": false 00:23:49.587 }, 00:23:49.587 "vs": { 00:23:49.587 "nvme_version": "1.3" 00:23:49.587 }, 00:23:49.587 "ns_data": { 00:23:49.587 "id": 1, 00:23:49.587 "can_share": true 00:23:49.587 } 00:23:49.587 } 00:23:49.587 ], 00:23:49.587 "mp_policy": "active_passive" 00:23:49.587 } 00:23:49.587 } 00:23:49.587 ] 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.587 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.tg1up6rVSn 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.848 rmmod nvme_tcp 00:23:49.848 rmmod nvme_fabrics 00:23:49.848 rmmod nvme_keyring 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2186567 ']' 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2186567 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2186567 ']' 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2186567 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2186567 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2186567' 00:23:49.848 killing process with pid 2186567 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2186567 00:23:49.848 [2024-07-12 11:02:06.699649] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:23:49.848 [2024-07-12 11:02:06.699689] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:49.848 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2186567 00:23:50.107 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.107 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.107 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.107 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.107 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.107 11:02:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.107 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.107 11:02:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.018 11:02:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:52.018 00:23:52.018 real 0m11.460s 00:23:52.018 user 0m4.058s 00:23:52.018 sys 0m5.889s 00:23:52.018 11:02:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:52.018 11:02:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.018 ************************************ 00:23:52.018 END TEST nvmf_async_init 00:23:52.018 ************************************ 00:23:52.018 11:02:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:52.018 11:02:08 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:52.018 11:02:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:52.018 11:02:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:52.018 11:02:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.278 ************************************ 00:23:52.278 START TEST dma 00:23:52.278 ************************************ 00:23:52.279 11:02:09 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:52.279 * Looking for test storage... 
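The dma test that starts here is a guarded no-op on this job: the DMA/memory-domain path is only exercised over RDMA, so host/dma.sh bails out right after sourcing common.sh, as the trace of dma.sh@12-13 below shows. The guard reduces to the following sketch (the transport variable name is illustrative; xtrace shows it already expanded to tcp):

    # host/dma.sh: nothing to test unless the transport is rdma
    if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0
    fi

That early exit is why dma finishes in a fraction of a second (real 0m0.133s) without ever starting a target.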
00:23:52.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.279 11:02:09 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.279 11:02:09 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.279 11:02:09 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.279 11:02:09 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.279 11:02:09 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.279 11:02:09 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.279 11:02:09 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.279 11:02:09 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:52.279 11:02:09 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.279 11:02:09 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.279 11:02:09 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:52.279 11:02:09 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:52.279 00:23:52.279 real 0m0.133s 00:23:52.279 user 0m0.068s 00:23:52.279 sys 0m0.075s 00:23:52.279 11:02:09 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:52.279 11:02:09 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:52.279 ************************************ 00:23:52.279 END TEST dma 00:23:52.279 ************************************ 00:23:52.279 11:02:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:52.279 11:02:09 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:52.279 11:02:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:52.279 11:02:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:52.279 11:02:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.279 ************************************ 00:23:52.279 START TEST nvmf_identify 00:23:52.279 ************************************ 00:23:52.279 11:02:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:52.539 * Looking for test storage... 
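nvmf_identify opens the same way every host test here does: source test/nvmf/common.sh, which pins the well-known ports and mints a fresh host identity with nvme gen-hostnqn. The generated NQN embeds a UUID, and the bare UUID doubles as the host ID; the derivation below is a sketch, since common.sh's exact expression is hidden by expansion in the trace, which shows only the resulting 00d0226a-fbea-ec11-9bc7-a4bf019282be values:

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: strip down to the bare uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

From there the script sets MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 and calls nvmftestinit, which re-runs the NIC discovery and namespace setup already walked through for async_init above.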
00:23:52.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.539 11:02:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:00.683 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:00.683 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:00.683 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:00.683 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.683 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:24:00.684 00:24:00.684 --- 10.0.0.2 ping statistics --- 00:24:00.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.684 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:24:00.684 00:24:00.684 --- 10.0.0.1 ping statistics --- 00:24:00.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.684 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2191059 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2191059 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2191059 ']' 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.684 11:02:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.684 [2024-07-12 11:02:16.802834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:00.684 [2024-07-12 11:02:16.802897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.684 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.684 [2024-07-12 11:02:16.888581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.684 [2024-07-12 11:02:16.987973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
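The xtrace above builds the point-to-point test topology: E810 port cvl_0_0 becomes the target interface inside a fresh network namespace (cvl_0_0_ns_spdk, 10.0.0.2/24), its peer cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1/24), an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction proves reachability before the target app is launched. Condensed into a plain shell sketch for readability (the same commands the trace shows nvmf/common.sh running; run as root):

    # Target side lives in its own netns; initiator side stays in the root namespace.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside ns)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator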
00:24:00.684 [2024-07-12 11:02:16.988035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.684 [2024-07-12 11:02:16.988043] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.684 [2024-07-12 11:02:16.988050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.684 [2024-07-12 11:02:16.988057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.684 [2024-07-12 11:02:16.988174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.684 [2024-07-12 11:02:16.988259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.684 [2024-07-12 11:02:16.988567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.684 [2024-07-12 11:02:16.988572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.684 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.684 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:00.684 11:02:17 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.684 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.684 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.684 [2024-07-12 11:02:17.615237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.684 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.684 11:02:17 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:00.684 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.684 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.947 Malloc0 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.947 [2024-07-12 11:02:17.725106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.947 [ 00:24:00.947 { 00:24:00.947 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:00.947 "subtype": "Discovery", 00:24:00.947 "listen_addresses": [ 00:24:00.947 { 00:24:00.947 "trtype": "TCP", 00:24:00.947 "adrfam": "IPv4", 00:24:00.947 "traddr": "10.0.0.2", 00:24:00.947 "trsvcid": "4420" 00:24:00.947 } 00:24:00.947 ], 00:24:00.947 "allow_any_host": true, 00:24:00.947 "hosts": [] 00:24:00.947 }, 00:24:00.947 { 00:24:00.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.947 "subtype": "NVMe", 00:24:00.947 "listen_addresses": [ 00:24:00.947 { 00:24:00.947 "trtype": "TCP", 00:24:00.947 "adrfam": "IPv4", 00:24:00.947 "traddr": "10.0.0.2", 00:24:00.947 "trsvcid": "4420" 00:24:00.947 } 00:24:00.947 ], 00:24:00.947 "allow_any_host": true, 00:24:00.947 "hosts": [], 00:24:00.947 "serial_number": "SPDK00000000000001", 00:24:00.947 "model_number": "SPDK bdev Controller", 00:24:00.947 "max_namespaces": 32, 00:24:00.947 "min_cntlid": 1, 00:24:00.947 "max_cntlid": 65519, 00:24:00.947 "namespaces": [ 00:24:00.947 { 00:24:00.947 "nsid": 1, 00:24:00.947 "bdev_name": "Malloc0", 00:24:00.947 "name": "Malloc0", 00:24:00.947 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:00.947 "eui64": "ABCDEF0123456789", 00:24:00.947 "uuid": "7fb02325-6234-4a1d-a102-3ef9757d7d38" 00:24:00.947 } 00:24:00.947 ] 00:24:00.947 } 00:24:00.947 ] 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.947 11:02:17 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:00.947 [2024-07-12 11:02:17.789647] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
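With nvmf_tgt listening on the default /var/tmp/spdk.sock, host/identify.sh configures the target over JSON-RPC; rpc_cmd is the harness wrapper around scripts/rpc.py. Written out as plain rpc.py calls, the sequence traced above is (a sketch for readability; paths are relative to the SPDK tree):

    # Start the target inside the namespace, as the trace shows:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Once the RPC socket is up: transport, backing bdev, subsystem, namespace, listeners.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the harness's options
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems                          # prints the JSON dumped above

The nvmf_get_subsystems output above confirms the end state: both the discovery subsystem and cnode1 listening on 10.0.0.2:4420, with namespace 1 (Malloc0) exported, before spdk_nvme_identify is pointed at the discovery NQN.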
00:24:00.947 [2024-07-12 11:02:17.789721] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191316 ] 00:24:00.947 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.947 [2024-07-12 11:02:17.827262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:00.947 [2024-07-12 11:02:17.827326] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:00.947 [2024-07-12 11:02:17.827332] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:00.947 [2024-07-12 11:02:17.827351] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:00.947 [2024-07-12 11:02:17.827359] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:00.947 [2024-07-12 11:02:17.827970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:00.947 [2024-07-12 11:02:17.828009] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17dfec0 0 00:24:00.947 [2024-07-12 11:02:17.842137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:00.947 [2024-07-12 11:02:17.842151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:00.947 [2024-07-12 11:02:17.842157] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:00.947 [2024-07-12 11:02:17.842160] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:00.947 [2024-07-12 11:02:17.842210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.947 [2024-07-12 11:02:17.842217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.947 [2024-07-12 11:02:17.842221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17dfec0) 00:24:00.947 [2024-07-12 11:02:17.842240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:00.947 [2024-07-12 11:02:17.842260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862e40, cid 0, qid 0 00:24:00.947 [2024-07-12 11:02:17.850136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.947 [2024-07-12 11:02:17.850146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.947 [2024-07-12 11:02:17.850149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.947 [2024-07-12 11:02:17.850154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862e40) on tqpair=0x17dfec0 00:24:00.947 [2024-07-12 11:02:17.850167] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:00.947 [2024-07-12 11:02:17.850175] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:00.947 [2024-07-12 11:02:17.850180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:00.947 [2024-07-12 11:02:17.850197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.947 [2024-07-12 11:02:17.850201] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.947 [2024-07-12 11:02:17.850204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17dfec0) 00:24:00.947 [2024-07-12 11:02:17.850212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.947 [2024-07-12 11:02:17.850226] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862e40, cid 0, qid 0 00:24:00.948 [2024-07-12 11:02:17.850477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.948 [2024-07-12 11:02:17.850485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.948 [2024-07-12 11:02:17.850489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.850492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862e40) on tqpair=0x17dfec0 00:24:00.948 [2024-07-12 11:02:17.850499] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:00.948 [2024-07-12 11:02:17.850506] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:00.948 [2024-07-12 11:02:17.850514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.850518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.850521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17dfec0) 00:24:00.948 [2024-07-12 11:02:17.850528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.948 [2024-07-12 11:02:17.850544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862e40, cid 0, qid 0 00:24:00.948 [2024-07-12 11:02:17.850780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.948 [2024-07-12 11:02:17.850787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.948 [2024-07-12 11:02:17.850790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.850795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862e40) on tqpair=0x17dfec0 00:24:00.948 [2024-07-12 11:02:17.850800] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:00.948 [2024-07-12 11:02:17.850809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:00.948 [2024-07-12 11:02:17.850816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.850819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.850823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17dfec0) 00:24:00.948 [2024-07-12 11:02:17.850829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.948 [2024-07-12 11:02:17.850840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862e40, cid 0, qid 0 00:24:00.948 [2024-07-12 11:02:17.851057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.948 
[2024-07-12 11:02:17.851065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.948 [2024-07-12 11:02:17.851071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.851076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862e40) on tqpair=0x17dfec0 00:24:00.948 [2024-07-12 11:02:17.851082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:00.948 [2024-07-12 11:02:17.851091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.851096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.851099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17dfec0) 00:24:00.948 [2024-07-12 11:02:17.851106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.948 [2024-07-12 11:02:17.851116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862e40, cid 0, qid 0 00:24:00.948 [2024-07-12 11:02:17.851342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.948 [2024-07-12 11:02:17.851349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.948 [2024-07-12 11:02:17.851353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.851357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862e40) on tqpair=0x17dfec0 00:24:00.948 [2024-07-12 11:02:17.851362] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:00.948 [2024-07-12 11:02:17.851367] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:00.948 [2024-07-12 11:02:17.851374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:00.948 [2024-07-12 11:02:17.851480] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:00.948 [2024-07-12 11:02:17.851484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:00.948 [2024-07-12 11:02:17.851494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.851498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.851505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17dfec0) 00:24:00.948 [2024-07-12 11:02:17.851512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.948 [2024-07-12 11:02:17.851522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862e40, cid 0, qid 0 00:24:00.948 [2024-07-12 11:02:17.851725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.948 [2024-07-12 11:02:17.851732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.948 [2024-07-12 11:02:17.851736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.851741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862e40) on tqpair=0x17dfec0 00:24:00.948 [2024-07-12 11:02:17.851748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:00.948 [2024-07-12 11:02:17.851758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.851762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.851765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17dfec0) 00:24:00.948 [2024-07-12 11:02:17.851772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.948 [2024-07-12 11:02:17.851782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862e40, cid 0, qid 0 00:24:00.948 [2024-07-12 11:02:17.851991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.948 [2024-07-12 11:02:17.851999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.948 [2024-07-12 11:02:17.852003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.852007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862e40) on tqpair=0x17dfec0 00:24:00.948 [2024-07-12 11:02:17.852012] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:00.948 [2024-07-12 11:02:17.852016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:00.948 [2024-07-12 11:02:17.852024] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:00.948 [2024-07-12 11:02:17.852034] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:00.948 [2024-07-12 11:02:17.852044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.852050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17dfec0) 00:24:00.948 [2024-07-12 11:02:17.852058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.948 [2024-07-12 11:02:17.852068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862e40, cid 0, qid 0 00:24:00.948 [2024-07-12 11:02:17.852373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.948 [2024-07-12 11:02:17.852382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.948 [2024-07-12 11:02:17.852386] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.852391] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17dfec0): datao=0, datal=4096, cccid=0 00:24:00.948 [2024-07-12 11:02:17.852395] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1862e40) on tqpair(0x17dfec0): expected_datao=0, payload_size=4096 00:24:00.948 [2024-07-12 11:02:17.852400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.852409] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.852413] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.898131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.948 [2024-07-12 11:02:17.898142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.948 [2024-07-12 11:02:17.898146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.898151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862e40) on tqpair=0x17dfec0 00:24:00.948 [2024-07-12 11:02:17.898163] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:00.948 [2024-07-12 11:02:17.898174] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:00.948 [2024-07-12 11:02:17.898179] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:00.948 [2024-07-12 11:02:17.898185] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:00.948 [2024-07-12 11:02:17.898190] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:00.948 [2024-07-12 11:02:17.898196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:00.948 [2024-07-12 11:02:17.898206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:00.948 [2024-07-12 11:02:17.898215] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.898221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.898225] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17dfec0) 00:24:00.948 [2024-07-12 11:02:17.898233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:00.948 [2024-07-12 11:02:17.898248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862e40, cid 0, qid 0 00:24:00.948 [2024-07-12 11:02:17.898461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.948 [2024-07-12 11:02:17.898468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.948 [2024-07-12 11:02:17.898471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.898475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862e40) on tqpair=0x17dfec0 00:24:00.948 [2024-07-12 11:02:17.898485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.898488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.898492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17dfec0) 00:24:00.948 [2024-07-12 11:02:17.898499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.948 [2024-07-12 11:02:17.898506] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.898509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.898513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17dfec0) 00:24:00.948 [2024-07-12 11:02:17.898519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.948 [2024-07-12 11:02:17.898525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.948 [2024-07-12 11:02:17.898530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.898533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17dfec0) 00:24:00.949 [2024-07-12 11:02:17.898539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.949 [2024-07-12 11:02:17.898545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.898554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.898558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17dfec0) 00:24:00.949 [2024-07-12 11:02:17.898564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.949 [2024-07-12 11:02:17.898569] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:00.949 [2024-07-12 11:02:17.898582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:00.949 [2024-07-12 11:02:17.898590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.898594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17dfec0) 00:24:00.949 [2024-07-12 11:02:17.898601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.949 [2024-07-12 11:02:17.898615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862e40, cid 0, qid 0 00:24:00.949 [2024-07-12 11:02:17.898621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862fc0, cid 1, qid 0 00:24:00.949 [2024-07-12 11:02:17.898626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1863140, cid 2, qid 0 00:24:00.949 [2024-07-12 11:02:17.898631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18632c0, cid 3, qid 0 00:24:00.949 [2024-07-12 11:02:17.898636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1863440, cid 4, qid 0 00:24:00.949 [2024-07-12 11:02:17.898885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.949 [2024-07-12 11:02:17.898893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.949 [2024-07-12 11:02:17.898896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.898901] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1863440) on tqpair=0x17dfec0 00:24:00.949 [2024-07-12 11:02:17.898907] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:00.949 [2024-07-12 11:02:17.898913] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:00.949 [2024-07-12 11:02:17.898926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.898930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17dfec0) 00:24:00.949 [2024-07-12 11:02:17.898936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.949 [2024-07-12 11:02:17.898946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1863440, cid 4, qid 0 00:24:00.949 [2024-07-12 11:02:17.899182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.949 [2024-07-12 11:02:17.899190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.949 [2024-07-12 11:02:17.899193] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899197] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17dfec0): datao=0, datal=4096, cccid=4 00:24:00.949 [2024-07-12 11:02:17.899202] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1863440) on tqpair(0x17dfec0): expected_datao=0, payload_size=4096 00:24:00.949 [2024-07-12 11:02:17.899206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899244] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899248] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.949 [2024-07-12 11:02:17.899464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.949 [2024-07-12 11:02:17.899467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1863440) on tqpair=0x17dfec0 00:24:00.949 [2024-07-12 11:02:17.899490] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:00.949 [2024-07-12 11:02:17.899519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17dfec0) 00:24:00.949 [2024-07-12 11:02:17.899531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.949 [2024-07-12 11:02:17.899539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17dfec0) 00:24:00.949 [2024-07-12 11:02:17.899552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.949 [2024-07-12 11:02:17.899568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1863440, cid 4, qid 0 00:24:00.949 [2024-07-12 11:02:17.899573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18635c0, cid 5, qid 0 00:24:00.949 [2024-07-12 11:02:17.899833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.949 [2024-07-12 11:02:17.899840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.949 [2024-07-12 11:02:17.899843] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899847] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17dfec0): datao=0, datal=1024, cccid=4 00:24:00.949 [2024-07-12 11:02:17.899851] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1863440) on tqpair(0x17dfec0): expected_datao=0, payload_size=1024 00:24:00.949 [2024-07-12 11:02:17.899855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899862] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899865] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.949 [2024-07-12 11:02:17.899877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.949 [2024-07-12 11:02:17.899880] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.949 [2024-07-12 11:02:17.899884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18635c0) on tqpair=0x17dfec0 00:24:01.215 [2024-07-12 11:02:17.944134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.215 [2024-07-12 11:02:17.944150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.215 [2024-07-12 11:02:17.944154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.944158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1863440) on tqpair=0x17dfec0 00:24:01.215 [2024-07-12 11:02:17.944183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.944187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17dfec0) 00:24:01.215 [2024-07-12 11:02:17.944195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.215 [2024-07-12 11:02:17.944213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1863440, cid 4, qid 0 00:24:01.215 [2024-07-12 11:02:17.944468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.215 [2024-07-12 11:02:17.944475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.215 [2024-07-12 11:02:17.944479] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.944483] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17dfec0): datao=0, datal=3072, cccid=4 00:24:01.215 [2024-07-12 11:02:17.944487] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1863440) on tqpair(0x17dfec0): expected_datao=0, payload_size=3072 00:24:01.215 [2024-07-12 11:02:17.944496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.944535] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.944539] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.986301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.215 [2024-07-12 11:02:17.986312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.215 [2024-07-12 11:02:17.986315] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.986319] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1863440) on tqpair=0x17dfec0 00:24:01.215 [2024-07-12 11:02:17.986331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.986335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17dfec0) 00:24:01.215 [2024-07-12 11:02:17.986342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.215 [2024-07-12 11:02:17.986359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1863440, cid 4, qid 0 00:24:01.215 [2024-07-12 11:02:17.986617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.215 [2024-07-12 11:02:17.986623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.215 [2024-07-12 11:02:17.986627] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.986630] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17dfec0): datao=0, datal=8, cccid=4 00:24:01.215 [2024-07-12 11:02:17.986635] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1863440) on tqpair(0x17dfec0): expected_datao=0, payload_size=8 00:24:01.215 [2024-07-12 11:02:17.986639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.986645] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:17.986649] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:18.028378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.215 [2024-07-12 11:02:18.028390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.215 [2024-07-12 11:02:18.028394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.215 [2024-07-12 11:02:18.028399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1863440) on tqpair=0x17dfec0 00:24:01.215 ===================================================== 00:24:01.215 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:01.215 ===================================================== 00:24:01.215 Controller Capabilities/Features 00:24:01.215 ================================ 00:24:01.215 Vendor ID: 0000 00:24:01.215 Subsystem Vendor ID: 0000 00:24:01.215 Serial Number: .................... 00:24:01.215 Model Number: ........................................ 
00:24:01.215 Firmware Version: 24.09 00:24:01.215 Recommended Arb Burst: 0 00:24:01.215 IEEE OUI Identifier: 00 00 00 00:24:01.215 Multi-path I/O 00:24:01.215 May have multiple subsystem ports: No 00:24:01.215 May have multiple controllers: No 00:24:01.215 Associated with SR-IOV VF: No 00:24:01.215 Max Data Transfer Size: 131072 00:24:01.215 Max Number of Namespaces: 0 00:24:01.215 Max Number of I/O Queues: 1024 00:24:01.215 NVMe Specification Version (VS): 1.3 00:24:01.215 NVMe Specification Version (Identify): 1.3 00:24:01.215 Maximum Queue Entries: 128 00:24:01.215 Contiguous Queues Required: Yes 00:24:01.215 Arbitration Mechanisms Supported 00:24:01.215 Weighted Round Robin: Not Supported 00:24:01.215 Vendor Specific: Not Supported 00:24:01.215 Reset Timeout: 15000 ms 00:24:01.215 Doorbell Stride: 4 bytes 00:24:01.215 NVM Subsystem Reset: Not Supported 00:24:01.215 Command Sets Supported 00:24:01.215 NVM Command Set: Supported 00:24:01.215 Boot Partition: Not Supported 00:24:01.215 Memory Page Size Minimum: 4096 bytes 00:24:01.215 Memory Page Size Maximum: 4096 bytes 00:24:01.215 Persistent Memory Region: Not Supported 00:24:01.215 Optional Asynchronous Events Supported 00:24:01.215 Namespace Attribute Notices: Not Supported 00:24:01.215 Firmware Activation Notices: Not Supported 00:24:01.215 ANA Change Notices: Not Supported 00:24:01.215 PLE Aggregate Log Change Notices: Not Supported 00:24:01.215 LBA Status Info Alert Notices: Not Supported 00:24:01.215 EGE Aggregate Log Change Notices: Not Supported 00:24:01.215 Normal NVM Subsystem Shutdown event: Not Supported 00:24:01.215 Zone Descriptor Change Notices: Not Supported 00:24:01.215 Discovery Log Change Notices: Supported 00:24:01.215 Controller Attributes 00:24:01.215 128-bit Host Identifier: Not Supported 00:24:01.215 Non-Operational Permissive Mode: Not Supported 00:24:01.215 NVM Sets: Not Supported 00:24:01.215 Read Recovery Levels: Not Supported 00:24:01.215 Endurance Groups: Not Supported 00:24:01.215 Predictable Latency Mode: Not Supported 00:24:01.215 Traffic Based Keep ALive: Not Supported 00:24:01.215 Namespace Granularity: Not Supported 00:24:01.215 SQ Associations: Not Supported 00:24:01.215 UUID List: Not Supported 00:24:01.215 Multi-Domain Subsystem: Not Supported 00:24:01.215 Fixed Capacity Management: Not Supported 00:24:01.215 Variable Capacity Management: Not Supported 00:24:01.215 Delete Endurance Group: Not Supported 00:24:01.215 Delete NVM Set: Not Supported 00:24:01.215 Extended LBA Formats Supported: Not Supported 00:24:01.215 Flexible Data Placement Supported: Not Supported 00:24:01.215 00:24:01.215 Controller Memory Buffer Support 00:24:01.215 ================================ 00:24:01.215 Supported: No 00:24:01.215 00:24:01.215 Persistent Memory Region Support 00:24:01.215 ================================ 00:24:01.215 Supported: No 00:24:01.215 00:24:01.215 Admin Command Set Attributes 00:24:01.215 ============================ 00:24:01.215 Security Send/Receive: Not Supported 00:24:01.215 Format NVM: Not Supported 00:24:01.215 Firmware Activate/Download: Not Supported 00:24:01.215 Namespace Management: Not Supported 00:24:01.215 Device Self-Test: Not Supported 00:24:01.215 Directives: Not Supported 00:24:01.215 NVMe-MI: Not Supported 00:24:01.215 Virtualization Management: Not Supported 00:24:01.215 Doorbell Buffer Config: Not Supported 00:24:01.215 Get LBA Status Capability: Not Supported 00:24:01.215 Command & Feature Lockdown Capability: Not Supported 00:24:01.215 Abort Command Limit: 1 00:24:01.215 Async 
Event Request Limit: 4 00:24:01.215 Number of Firmware Slots: N/A 00:24:01.215 Firmware Slot 1 Read-Only: N/A 00:24:01.215 Firmware Activation Without Reset: N/A 00:24:01.215 Multiple Update Detection Support: N/A 00:24:01.215 Firmware Update Granularity: No Information Provided 00:24:01.215 Per-Namespace SMART Log: No 00:24:01.215 Asymmetric Namespace Access Log Page: Not Supported 00:24:01.215 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:01.215 Command Effects Log Page: Not Supported 00:24:01.215 Get Log Page Extended Data: Supported 00:24:01.216 Telemetry Log Pages: Not Supported 00:24:01.216 Persistent Event Log Pages: Not Supported 00:24:01.216 Supported Log Pages Log Page: May Support 00:24:01.216 Commands Supported & Effects Log Page: Not Supported 00:24:01.216 Feature Identifiers & Effects Log Page:May Support 00:24:01.216 NVMe-MI Commands & Effects Log Page: May Support 00:24:01.216 Data Area 4 for Telemetry Log: Not Supported 00:24:01.216 Error Log Page Entries Supported: 128 00:24:01.216 Keep Alive: Not Supported 00:24:01.216 00:24:01.216 NVM Command Set Attributes 00:24:01.216 ========================== 00:24:01.216 Submission Queue Entry Size 00:24:01.216 Max: 1 00:24:01.216 Min: 1 00:24:01.216 Completion Queue Entry Size 00:24:01.216 Max: 1 00:24:01.216 Min: 1 00:24:01.216 Number of Namespaces: 0 00:24:01.216 Compare Command: Not Supported 00:24:01.216 Write Uncorrectable Command: Not Supported 00:24:01.216 Dataset Management Command: Not Supported 00:24:01.216 Write Zeroes Command: Not Supported 00:24:01.216 Set Features Save Field: Not Supported 00:24:01.216 Reservations: Not Supported 00:24:01.216 Timestamp: Not Supported 00:24:01.216 Copy: Not Supported 00:24:01.216 Volatile Write Cache: Not Present 00:24:01.216 Atomic Write Unit (Normal): 1 00:24:01.216 Atomic Write Unit (PFail): 1 00:24:01.216 Atomic Compare & Write Unit: 1 00:24:01.216 Fused Compare & Write: Supported 00:24:01.216 Scatter-Gather List 00:24:01.216 SGL Command Set: Supported 00:24:01.216 SGL Keyed: Supported 00:24:01.216 SGL Bit Bucket Descriptor: Not Supported 00:24:01.216 SGL Metadata Pointer: Not Supported 00:24:01.216 Oversized SGL: Not Supported 00:24:01.216 SGL Metadata Address: Not Supported 00:24:01.216 SGL Offset: Supported 00:24:01.216 Transport SGL Data Block: Not Supported 00:24:01.216 Replay Protected Memory Block: Not Supported 00:24:01.216 00:24:01.216 Firmware Slot Information 00:24:01.216 ========================= 00:24:01.216 Active slot: 0 00:24:01.216 00:24:01.216 00:24:01.216 Error Log 00:24:01.216 ========= 00:24:01.216 00:24:01.216 Active Namespaces 00:24:01.216 ================= 00:24:01.216 Discovery Log Page 00:24:01.216 ================== 00:24:01.216 Generation Counter: 2 00:24:01.216 Number of Records: 2 00:24:01.216 Record Format: 0 00:24:01.216 00:24:01.216 Discovery Log Entry 0 00:24:01.216 ---------------------- 00:24:01.216 Transport Type: 3 (TCP) 00:24:01.216 Address Family: 1 (IPv4) 00:24:01.216 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:01.216 Entry Flags: 00:24:01.216 Duplicate Returned Information: 1 00:24:01.216 Explicit Persistent Connection Support for Discovery: 1 00:24:01.216 Transport Requirements: 00:24:01.216 Secure Channel: Not Required 00:24:01.216 Port ID: 0 (0x0000) 00:24:01.216 Controller ID: 65535 (0xffff) 00:24:01.216 Admin Max SQ Size: 128 00:24:01.216 Transport Service Identifier: 4420 00:24:01.216 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:01.216 Transport Address: 10.0.0.2 00:24:01.216 
Discovery Log Entry 1 00:24:01.216 ---------------------- 00:24:01.216 Transport Type: 3 (TCP) 00:24:01.216 Address Family: 1 (IPv4) 00:24:01.216 Subsystem Type: 2 (NVM Subsystem) 00:24:01.216 Entry Flags: 00:24:01.216 Duplicate Returned Information: 0 00:24:01.216 Explicit Persistent Connection Support for Discovery: 0 00:24:01.216 Transport Requirements: 00:24:01.216 Secure Channel: Not Required 00:24:01.216 Port ID: 0 (0x0000) 00:24:01.216 Controller ID: 65535 (0xffff) 00:24:01.216 Admin Max SQ Size: 128 00:24:01.216 Transport Service Identifier: 4420 00:24:01.216 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:01.216 Transport Address: 10.0.0.2 [2024-07-12 11:02:18.028505] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:01.216 [2024-07-12 11:02:18.028518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862e40) on tqpair=0x17dfec0 00:24:01.216 [2024-07-12 11:02:18.028525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.216 [2024-07-12 11:02:18.028531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1862fc0) on tqpair=0x17dfec0 00:24:01.216 [2024-07-12 11:02:18.028536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.216 [2024-07-12 11:02:18.028541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1863140) on tqpair=0x17dfec0 00:24:01.216 [2024-07-12 11:02:18.028545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.216 [2024-07-12 11:02:18.028550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18632c0) on tqpair=0x17dfec0 00:24:01.216 [2024-07-12 11:02:18.028554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.216 [2024-07-12 11:02:18.028567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.028571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.028577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17dfec0) 00:24:01.216 [2024-07-12 11:02:18.028585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.216 [2024-07-12 11:02:18.028603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18632c0, cid 3, qid 0 00:24:01.216 [2024-07-12 11:02:18.028865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.216 [2024-07-12 11:02:18.028871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.216 [2024-07-12 11:02:18.028875] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.028879] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18632c0) on tqpair=0x17dfec0 00:24:01.216 [2024-07-12 11:02:18.028887] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.028891] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.028894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17dfec0) 00:24:01.216 [2024-07-12 
11:02:18.028901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.216 [2024-07-12 11:02:18.028915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18632c0, cid 3, qid 0 00:24:01.216 [2024-07-12 11:02:18.033134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.216 [2024-07-12 11:02:18.033142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.216 [2024-07-12 11:02:18.033145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.033149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18632c0) on tqpair=0x17dfec0 00:24:01.216 [2024-07-12 11:02:18.033155] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:01.216 [2024-07-12 11:02:18.033159] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:01.216 [2024-07-12 11:02:18.033170] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.033174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.033177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17dfec0) 00:24:01.216 [2024-07-12 11:02:18.033184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.216 [2024-07-12 11:02:18.033196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18632c0, cid 3, qid 0 00:24:01.216 [2024-07-12 11:02:18.033419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.216 [2024-07-12 11:02:18.033426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.216 [2024-07-12 11:02:18.033429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.033433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18632c0) on tqpair=0x17dfec0 00:24:01.216 [2024-07-12 11:02:18.033442] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:24:01.216 00:24:01.216 11:02:18 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:01.216 [2024-07-12 11:02:18.080425] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
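
The second spdk_nvme_identify invocation on the line above can be reproduced by hand from an SPDK build tree. A minimal sketch, assuming the target from this run is still listening on 10.0.0.2:4420; the transport-ID string and -L all (enable every SPDK debug log flag, which is what produces the *DEBUG* trace below) are copied verbatim from the command line above, while the first, discovery-only pass is an assumption, since its command line falls outside this excerpt:

# Pass 1 (assumed): no subnqn given, so the discovery controller at
# nqn.2014-08.org.nvmexpress.discovery answers with the log page printed above.
./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

# Pass 2 (verbatim from this log): identify the NVM subsystem advertised in
# Discovery Log Entry 1, with all debug components enabled.
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L all
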
00:24:01.216 [2024-07-12 11:02:18.080472] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191322 ] 00:24:01.216 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.216 [2024-07-12 11:02:18.114084] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:01.216 [2024-07-12 11:02:18.118144] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:01.216 [2024-07-12 11:02:18.118151] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:01.216 [2024-07-12 11:02:18.118164] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:01.216 [2024-07-12 11:02:18.118171] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:01.216 [2024-07-12 11:02:18.118700] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:01.216 [2024-07-12 11:02:18.118730] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xad4ec0 0 00:24:01.216 [2024-07-12 11:02:18.133130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:01.216 [2024-07-12 11:02:18.133144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:01.216 [2024-07-12 11:02:18.133148] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:01.216 [2024-07-12 11:02:18.133152] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:01.216 [2024-07-12 11:02:18.133190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.133195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.216 [2024-07-12 11:02:18.133199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4ec0) 00:24:01.216 [2024-07-12 11:02:18.133214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:01.217 [2024-07-12 11:02:18.133233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57e40, cid 0, qid 0 00:24:01.217 [2024-07-12 11:02:18.141133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.217 [2024-07-12 11:02:18.141143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.217 [2024-07-12 11:02:18.141146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.141151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57e40) on tqpair=0xad4ec0 00:24:01.217 [2024-07-12 11:02:18.141163] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:01.217 [2024-07-12 11:02:18.141170] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:01.217 [2024-07-12 11:02:18.141175] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:01.217 [2024-07-12 11:02:18.141189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.141193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.217 
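
The trace above is the host half of an NVMe/TCP fabrics connect: a posix client socket to 10.0.0.2:4420, an ICReq/ICResp exchange that leaves header and data digests disabled (host_hdgst_enable and host_ddgst_enable both 0), then a FABRIC CONNECT capsule on the admin queue that the target answers with CNTLID 0x0001. As an illustration only, not part of this test, the kernel initiator reaches the same point with nvme-cli, assuming the nvme-tcp module is loaded:

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list   # the new controller should appear as /dev/nvmeX
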
[2024-07-12 11:02:18.141196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4ec0) 00:24:01.217 [2024-07-12 11:02:18.141205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.217 [2024-07-12 11:02:18.141219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57e40, cid 0, qid 0 00:24:01.217 [2024-07-12 11:02:18.141456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.217 [2024-07-12 11:02:18.141463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.217 [2024-07-12 11:02:18.141467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.141471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57e40) on tqpair=0xad4ec0 00:24:01.217 [2024-07-12 11:02:18.141476] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:01.217 [2024-07-12 11:02:18.141484] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:01.217 [2024-07-12 11:02:18.141491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.141499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.141503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4ec0) 00:24:01.217 [2024-07-12 11:02:18.141510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.217 [2024-07-12 11:02:18.141522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57e40, cid 0, qid 0 00:24:01.217 [2024-07-12 11:02:18.141752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.217 [2024-07-12 11:02:18.141758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.217 [2024-07-12 11:02:18.141761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.141765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57e40) on tqpair=0xad4ec0 00:24:01.217 [2024-07-12 11:02:18.141770] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:01.217 [2024-07-12 11:02:18.141779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:01.217 [2024-07-12 11:02:18.141785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.141789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.141792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4ec0) 00:24:01.217 [2024-07-12 11:02:18.141799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.217 [2024-07-12 11:02:18.141809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57e40, cid 0, qid 0 00:24:01.217 [2024-07-12 11:02:18.142012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.217 [2024-07-12 11:02:18.142018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.217 
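
Each FABRIC PROPERTY GET in this stretch is one register read in the controller-init state machine: "read vs" fetches the VS register (the value behind the "NVMe Specification Version (VS): 1.3" line in the identify summary below), "read cap" fetches CAP, and the "check en" / CSTS.RDY states just below read CC and CSTS for the enable handshake. On a connected fabrics controller the same property reads can be issued by hand; a sketch assuming nvme-cli and a hypothetical /dev/nvme0:

nvme get-property /dev/nvme0 --offset=0x08 --human-readable   # VS   - version
nvme get-property /dev/nvme0 --offset=0x00 --human-readable   # CAP  - capabilities
nvme get-property /dev/nvme0 --offset=0x14 --human-readable   # CC   - configuration (EN bit)
nvme get-property /dev/nvme0 --offset=0x1c --human-readable   # CSTS - status (RDY bit)
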
[2024-07-12 11:02:18.142022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57e40) on tqpair=0xad4ec0 00:24:01.217 [2024-07-12 11:02:18.142030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:01.217 [2024-07-12 11:02:18.142040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4ec0) 00:24:01.217 [2024-07-12 11:02:18.142054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.217 [2024-07-12 11:02:18.142064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57e40, cid 0, qid 0 00:24:01.217 [2024-07-12 11:02:18.142259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.217 [2024-07-12 11:02:18.142266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.217 [2024-07-12 11:02:18.142269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57e40) on tqpair=0xad4ec0 00:24:01.217 [2024-07-12 11:02:18.142277] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:01.217 [2024-07-12 11:02:18.142282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:01.217 [2024-07-12 11:02:18.142290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:01.217 [2024-07-12 11:02:18.142395] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:01.217 [2024-07-12 11:02:18.142402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:01.217 [2024-07-12 11:02:18.142410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4ec0) 00:24:01.217 [2024-07-12 11:02:18.142424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.217 [2024-07-12 11:02:18.142435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57e40, cid 0, qid 0 00:24:01.217 [2024-07-12 11:02:18.142677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.217 [2024-07-12 11:02:18.142684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.217 [2024-07-12 11:02:18.142687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57e40) on tqpair=0xad4ec0 00:24:01.217 [2024-07-12 
11:02:18.142696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:01.217 [2024-07-12 11:02:18.142705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4ec0) 00:24:01.217 [2024-07-12 11:02:18.142719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.217 [2024-07-12 11:02:18.142728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57e40, cid 0, qid 0 00:24:01.217 [2024-07-12 11:02:18.142935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.217 [2024-07-12 11:02:18.142941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.217 [2024-07-12 11:02:18.142944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57e40) on tqpair=0xad4ec0 00:24:01.217 [2024-07-12 11:02:18.142952] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:01.217 [2024-07-12 11:02:18.142957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:01.217 [2024-07-12 11:02:18.142965] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:01.217 [2024-07-12 11:02:18.142973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:01.217 [2024-07-12 11:02:18.142983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.142986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4ec0) 00:24:01.217 [2024-07-12 11:02:18.142993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.217 [2024-07-12 11:02:18.143003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57e40, cid 0, qid 0 00:24:01.217 [2024-07-12 11:02:18.143312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.217 [2024-07-12 11:02:18.143319] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.217 [2024-07-12 11:02:18.143323] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.143327] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4ec0): datao=0, datal=4096, cccid=0 00:24:01.217 [2024-07-12 11:02:18.143332] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb57e40) on tqpair(0xad4ec0): expected_datao=0, payload_size=4096 00:24:01.217 [2024-07-12 11:02:18.143339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.143347] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.143351] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.217 
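
The IDENTIFY (06h) with cdw10:00000001 just above is Identify Controller (the CNS value rides in the low byte of cdw10), and the 4096-byte C2HData PDU that follows carries the controller data structure; further down this trace the driver issues CNS 02h, 00h and 03h for the namespace side. Hypothetical nvme-cli equivalents, again assuming the controller enumerates as /dev/nvme0:

nvme id-ctrl  /dev/nvme0        # CNS 01h - identify controller       (cdw10:00000001)
nvme list-ns  /dev/nvme0        # CNS 02h - active namespace ID list  (cdw10:00000002)
nvme id-ns    /dev/nvme0 -n 1   # CNS 00h - identify namespace 1      (cdw10:00000000)
nvme ns-descs /dev/nvme0 -n 1   # CNS 03h - NS identifier descriptors (cdw10:00000003)
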
[2024-07-12 11:02:18.143514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.217 [2024-07-12 11:02:18.143521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.217 [2024-07-12 11:02:18.143524] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.143528] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57e40) on tqpair=0xad4ec0 00:24:01.217 [2024-07-12 11:02:18.143536] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:01.217 [2024-07-12 11:02:18.143544] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:01.217 [2024-07-12 11:02:18.143549] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:01.217 [2024-07-12 11:02:18.143553] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:01.217 [2024-07-12 11:02:18.143558] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:01.217 [2024-07-12 11:02:18.143562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:01.217 [2024-07-12 11:02:18.143571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:01.217 [2024-07-12 11:02:18.143578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.143582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.217 [2024-07-12 11:02:18.143585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4ec0) 00:24:01.217 [2024-07-12 11:02:18.143593] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:01.217 [2024-07-12 11:02:18.143604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57e40, cid 0, qid 0 00:24:01.217 [2024-07-12 11:02:18.143841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.218 [2024-07-12 11:02:18.143847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.218 [2024-07-12 11:02:18.143851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.143855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57e40) on tqpair=0xad4ec0 00:24:01.218 [2024-07-12 11:02:18.143862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.143866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.143869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4ec0) 00:24:01.218 [2024-07-12 11:02:18.143875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.218 [2024-07-12 11:02:18.143882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.143885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.143889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xad4ec0) 
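
The SET FEATURES with cdw10:0000000b above programs feature 0Bh (Asynchronous Event Configuration), after which the driver posts four ASYNC EVENT REQUESTs, one per slot permitted by the "Async Event Request Limit: 4" reported in the identify data; the GET FEATURES KEEP ALIVE TIMER (feature 0Fh) traced just below then settles on sending a keep-alive every 5000000 us. Both features can be read back from a connected controller; a sketch with a hypothetical /dev/nvme0:

nvme get-feature /dev/nvme0 -f 0x0b -H   # 0Bh - async event configuration
nvme get-feature /dev/nvme0 -f 0x0f -H   # 0Fh - keep alive timer
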
00:24:01.218 [2024-07-12 11:02:18.143895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.218 [2024-07-12 11:02:18.143901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.143904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.143908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xad4ec0) 00:24:01.218 [2024-07-12 11:02:18.143913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.218 [2024-07-12 11:02:18.143922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.143926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.143929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4ec0) 00:24:01.218 [2024-07-12 11:02:18.143935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.218 [2024-07-12 11:02:18.143939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.143950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.143956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.143960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4ec0) 00:24:01.218 [2024-07-12 11:02:18.143967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.218 [2024-07-12 11:02:18.143979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57e40, cid 0, qid 0 00:24:01.218 [2024-07-12 11:02:18.143984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb57fc0, cid 1, qid 0 00:24:01.218 [2024-07-12 11:02:18.143989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb58140, cid 2, qid 0 00:24:01.218 [2024-07-12 11:02:18.143993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb582c0, cid 3, qid 0 00:24:01.218 [2024-07-12 11:02:18.143998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb58440, cid 4, qid 0 00:24:01.218 [2024-07-12 11:02:18.144262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.218 [2024-07-12 11:02:18.144269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.218 [2024-07-12 11:02:18.144273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.144276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb58440) on tqpair=0xad4ec0 00:24:01.218 [2024-07-12 11:02:18.144281] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:01.218 [2024-07-12 11:02:18.144286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.144295] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.144302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.144308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.144312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.144315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4ec0) 00:24:01.218 [2024-07-12 11:02:18.144322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:01.218 [2024-07-12 11:02:18.144332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb58440, cid 4, qid 0 00:24:01.218 [2024-07-12 11:02:18.144567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.218 [2024-07-12 11:02:18.144574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.218 [2024-07-12 11:02:18.144577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.144581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb58440) on tqpair=0xad4ec0 00:24:01.218 [2024-07-12 11:02:18.144644] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.144656] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.144665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.144669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4ec0) 00:24:01.218 [2024-07-12 11:02:18.144675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.218 [2024-07-12 11:02:18.144686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb58440, cid 4, qid 0 00:24:01.218 [2024-07-12 11:02:18.144930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.218 [2024-07-12 11:02:18.144937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.218 [2024-07-12 11:02:18.144940] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.144944] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4ec0): datao=0, datal=4096, cccid=4 00:24:01.218 [2024-07-12 11:02:18.144948] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb58440) on tqpair(0xad4ec0): expected_datao=0, payload_size=4096 00:24:01.218 [2024-07-12 11:02:18.144953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.145056] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.145059] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.149131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.218 [2024-07-12 11:02:18.149139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:01.218 [2024-07-12 11:02:18.149142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.149146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb58440) on tqpair=0xad4ec0 00:24:01.218 [2024-07-12 11:02:18.149158] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:01.218 [2024-07-12 11:02:18.149175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.149185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.149192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.149196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4ec0) 00:24:01.218 [2024-07-12 11:02:18.149203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.218 [2024-07-12 11:02:18.149216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb58440, cid 4, qid 0 00:24:01.218 [2024-07-12 11:02:18.149461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.218 [2024-07-12 11:02:18.149467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.218 [2024-07-12 11:02:18.149471] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.149474] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4ec0): datao=0, datal=4096, cccid=4 00:24:01.218 [2024-07-12 11:02:18.149479] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb58440) on tqpair(0xad4ec0): expected_datao=0, payload_size=4096 00:24:01.218 [2024-07-12 11:02:18.149483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.149490] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.149494] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.149654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.218 [2024-07-12 11:02:18.149660] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.218 [2024-07-12 11:02:18.149664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.149671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb58440) on tqpair=0xad4ec0 00:24:01.218 [2024-07-12 11:02:18.149686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.149695] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.149702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.149706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4ec0) 00:24:01.218 [2024-07-12 11:02:18.149712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.218 [2024-07-12 11:02:18.149723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb58440, cid 4, qid 0 00:24:01.218 [2024-07-12 11:02:18.149920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.218 [2024-07-12 11:02:18.149927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.218 [2024-07-12 11:02:18.149930] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.149934] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4ec0): datao=0, datal=4096, cccid=4 00:24:01.218 [2024-07-12 11:02:18.149938] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb58440) on tqpair(0xad4ec0): expected_datao=0, payload_size=4096 00:24:01.218 [2024-07-12 11:02:18.149942] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.150051] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.150055] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.150230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.218 [2024-07-12 11:02:18.150236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.218 [2024-07-12 11:02:18.150240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.218 [2024-07-12 11:02:18.150244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb58440) on tqpair=0xad4ec0 00:24:01.218 [2024-07-12 11:02:18.150252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.150260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.150273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:01.218 [2024-07-12 11:02:18.150281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:01.219 [2024-07-12 11:02:18.150286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:01.219 [2024-07-12 11:02:18.150291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:01.219 [2024-07-12 11:02:18.150297] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:01.219 [2024-07-12 11:02:18.150301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:01.219 [2024-07-12 11:02:18.150306] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:01.219 [2024-07-12 11:02:18.150325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.150329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4ec0) 00:24:01.219 [2024-07-12 11:02:18.150335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.219 [2024-07-12 11:02:18.150345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.150349] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.150352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad4ec0) 00:24:01.219 [2024-07-12 11:02:18.150358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.219 [2024-07-12 11:02:18.150373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb58440, cid 4, qid 0 00:24:01.219 [2024-07-12 11:02:18.150378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb585c0, cid 5, qid 0 00:24:01.219 [2024-07-12 11:02:18.150599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.219 [2024-07-12 11:02:18.150605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.219 [2024-07-12 11:02:18.150608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.150612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb58440) on tqpair=0xad4ec0 00:24:01.219 [2024-07-12 11:02:18.150619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.219 [2024-07-12 11:02:18.150625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.219 [2024-07-12 11:02:18.150628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.150632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb585c0) on tqpair=0xad4ec0 00:24:01.219 [2024-07-12 11:02:18.150641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.150644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad4ec0) 00:24:01.219 [2024-07-12 11:02:18.150651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.219 [2024-07-12 11:02:18.150661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb585c0, cid 5, qid 0 00:24:01.219 [2024-07-12 11:02:18.150854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.219 [2024-07-12 11:02:18.150860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.219 [2024-07-12 11:02:18.150864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.150868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb585c0) on tqpair=0xad4ec0 00:24:01.219 [2024-07-12 11:02:18.150877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.150880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad4ec0) 00:24:01.219 [2024-07-12 11:02:18.150886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.219 [2024-07-12 11:02:18.150896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb585c0, cid 5, qid 0 00:24:01.219 [2024-07-12 11:02:18.151083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.219 [2024-07-12 11:02:18.151089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:01.219 [2024-07-12 11:02:18.151093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb585c0) on tqpair=0xad4ec0 00:24:01.219 [2024-07-12 11:02:18.151106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad4ec0) 00:24:01.219 [2024-07-12 11:02:18.151115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.219 [2024-07-12 11:02:18.151130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb585c0, cid 5, qid 0 00:24:01.219 [2024-07-12 11:02:18.151454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.219 [2024-07-12 11:02:18.151460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.219 [2024-07-12 11:02:18.151466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb585c0) on tqpair=0xad4ec0 00:24:01.219 [2024-07-12 11:02:18.151486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad4ec0) 00:24:01.219 [2024-07-12 11:02:18.151496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.219 [2024-07-12 11:02:18.151503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4ec0) 00:24:01.219 [2024-07-12 11:02:18.151513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.219 [2024-07-12 11:02:18.151520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xad4ec0) 00:24:01.219 [2024-07-12 11:02:18.151529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.219 [2024-07-12 11:02:18.151537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xad4ec0) 00:24:01.219 [2024-07-12 11:02:18.151547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.219 [2024-07-12 11:02:18.151559] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb585c0, cid 5, qid 0 00:24:01.219 [2024-07-12 11:02:18.151564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb58440, cid 4, qid 0 00:24:01.219 [2024-07-12 11:02:18.151568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb58740, cid 6, qid 0 00:24:01.219 [2024-07-12 
11:02:18.151573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb588c0, cid 7, qid 0 00:24:01.219 [2024-07-12 11:02:18.151893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.219 [2024-07-12 11:02:18.151900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.219 [2024-07-12 11:02:18.151903] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151906] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4ec0): datao=0, datal=8192, cccid=5 00:24:01.219 [2024-07-12 11:02:18.151911] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb585c0) on tqpair(0xad4ec0): expected_datao=0, payload_size=8192 00:24:01.219 [2024-07-12 11:02:18.151915] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151963] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151967] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.219 [2024-07-12 11:02:18.151979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.219 [2024-07-12 11:02:18.151982] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.151985] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4ec0): datao=0, datal=512, cccid=4 00:24:01.219 [2024-07-12 11:02:18.151990] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb58440) on tqpair(0xad4ec0): expected_datao=0, payload_size=512 00:24:01.219 [2024-07-12 11:02:18.151994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.152000] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.152004] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.152011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.219 [2024-07-12 11:02:18.152017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.219 [2024-07-12 11:02:18.152021] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.152024] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4ec0): datao=0, datal=512, cccid=6 00:24:01.219 [2024-07-12 11:02:18.152028] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb58740) on tqpair(0xad4ec0): expected_datao=0, payload_size=512 00:24:01.219 [2024-07-12 11:02:18.152032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.152039] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.152042] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.152048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.219 [2024-07-12 11:02:18.152054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.219 [2024-07-12 11:02:18.152057] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.219 [2024-07-12 11:02:18.152060] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4ec0): datao=0, datal=4096, cccid=7 00:24:01.220 [2024-07-12 11:02:18.152065] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb588c0) on tqpair(0xad4ec0): expected_datao=0, payload_size=4096 00:24:01.220 [2024-07-12 11:02:18.152069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.220 [2024-07-12 11:02:18.152075] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.220 [2024-07-12 11:02:18.152079] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.220 [2024-07-12 11:02:18.152109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.220 [2024-07-12 11:02:18.152115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.220 [2024-07-12 11:02:18.152118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.220 [2024-07-12 11:02:18.152126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb585c0) on tqpair=0xad4ec0 00:24:01.220 [2024-07-12 11:02:18.152139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.220 [2024-07-12 11:02:18.152145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.220 [2024-07-12 11:02:18.152148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.220 [2024-07-12 11:02:18.152152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb58440) on tqpair=0xad4ec0 00:24:01.220 [2024-07-12 11:02:18.152163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.220 [2024-07-12 11:02:18.152169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.220 [2024-07-12 11:02:18.152172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.220 [2024-07-12 11:02:18.152176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb58740) on tqpair=0xad4ec0 00:24:01.220 [2024-07-12 11:02:18.152183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.220 [2024-07-12 11:02:18.152189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.220 [2024-07-12 11:02:18.152192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.220 [2024-07-12 11:02:18.152196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb588c0) on tqpair=0xad4ec0 00:24:01.220 ===================================================== 00:24:01.220 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:01.220 ===================================================== 00:24:01.220 Controller Capabilities/Features 00:24:01.220 ================================ 00:24:01.220 Vendor ID: 8086 00:24:01.220 Subsystem Vendor ID: 8086 00:24:01.220 Serial Number: SPDK00000000000001 00:24:01.220 Model Number: SPDK bdev Controller 00:24:01.220 Firmware Version: 24.09 00:24:01.220 Recommended Arb Burst: 6 00:24:01.220 IEEE OUI Identifier: e4 d2 5c 00:24:01.220 Multi-path I/O 00:24:01.220 May have multiple subsystem ports: Yes 00:24:01.220 May have multiple controllers: Yes 00:24:01.220 Associated with SR-IOV VF: No 00:24:01.220 Max Data Transfer Size: 131072 00:24:01.220 Max Number of Namespaces: 32 00:24:01.220 Max Number of I/O Queues: 127 00:24:01.220 NVMe Specification Version (VS): 1.3 00:24:01.220 NVMe Specification Version (Identify): 1.3 00:24:01.220 Maximum Queue Entries: 128 00:24:01.220 Contiguous Queues Required: Yes 00:24:01.220 Arbitration Mechanisms Supported 00:24:01.220 Weighted Round Robin: Not Supported 00:24:01.220 Vendor Specific: Not Supported 00:24:01.220 Reset Timeout: 15000 ms 00:24:01.220 
Doorbell Stride: 4 bytes 00:24:01.220 NVM Subsystem Reset: Not Supported 00:24:01.220 Command Sets Supported 00:24:01.220 NVM Command Set: Supported 00:24:01.220 Boot Partition: Not Supported 00:24:01.220 Memory Page Size Minimum: 4096 bytes 00:24:01.220 Memory Page Size Maximum: 4096 bytes 00:24:01.220 Persistent Memory Region: Not Supported 00:24:01.220 Optional Asynchronous Events Supported 00:24:01.220 Namespace Attribute Notices: Supported 00:24:01.220 Firmware Activation Notices: Not Supported 00:24:01.220 ANA Change Notices: Not Supported 00:24:01.220 PLE Aggregate Log Change Notices: Not Supported 00:24:01.220 LBA Status Info Alert Notices: Not Supported 00:24:01.220 EGE Aggregate Log Change Notices: Not Supported 00:24:01.220 Normal NVM Subsystem Shutdown event: Not Supported 00:24:01.220 Zone Descriptor Change Notices: Not Supported 00:24:01.220 Discovery Log Change Notices: Not Supported 00:24:01.220 Controller Attributes 00:24:01.220 128-bit Host Identifier: Supported 00:24:01.220 Non-Operational Permissive Mode: Not Supported 00:24:01.220 NVM Sets: Not Supported 00:24:01.220 Read Recovery Levels: Not Supported 00:24:01.220 Endurance Groups: Not Supported 00:24:01.220 Predictable Latency Mode: Not Supported 00:24:01.220 Traffic Based Keep ALive: Not Supported 00:24:01.220 Namespace Granularity: Not Supported 00:24:01.220 SQ Associations: Not Supported 00:24:01.220 UUID List: Not Supported 00:24:01.220 Multi-Domain Subsystem: Not Supported 00:24:01.220 Fixed Capacity Management: Not Supported 00:24:01.220 Variable Capacity Management: Not Supported 00:24:01.220 Delete Endurance Group: Not Supported 00:24:01.220 Delete NVM Set: Not Supported 00:24:01.220 Extended LBA Formats Supported: Not Supported 00:24:01.220 Flexible Data Placement Supported: Not Supported 00:24:01.220 00:24:01.220 Controller Memory Buffer Support 00:24:01.220 ================================ 00:24:01.220 Supported: No 00:24:01.220 00:24:01.220 Persistent Memory Region Support 00:24:01.220 ================================ 00:24:01.220 Supported: No 00:24:01.220 00:24:01.220 Admin Command Set Attributes 00:24:01.220 ============================ 00:24:01.220 Security Send/Receive: Not Supported 00:24:01.220 Format NVM: Not Supported 00:24:01.220 Firmware Activate/Download: Not Supported 00:24:01.220 Namespace Management: Not Supported 00:24:01.220 Device Self-Test: Not Supported 00:24:01.220 Directives: Not Supported 00:24:01.220 NVMe-MI: Not Supported 00:24:01.220 Virtualization Management: Not Supported 00:24:01.220 Doorbell Buffer Config: Not Supported 00:24:01.220 Get LBA Status Capability: Not Supported 00:24:01.220 Command & Feature Lockdown Capability: Not Supported 00:24:01.220 Abort Command Limit: 4 00:24:01.220 Async Event Request Limit: 4 00:24:01.220 Number of Firmware Slots: N/A 00:24:01.220 Firmware Slot 1 Read-Only: N/A 00:24:01.220 Firmware Activation Without Reset: N/A 00:24:01.220 Multiple Update Detection Support: N/A 00:24:01.220 Firmware Update Granularity: No Information Provided 00:24:01.220 Per-Namespace SMART Log: No 00:24:01.220 Asymmetric Namespace Access Log Page: Not Supported 00:24:01.220 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:01.220 Command Effects Log Page: Supported 00:24:01.220 Get Log Page Extended Data: Supported 00:24:01.220 Telemetry Log Pages: Not Supported 00:24:01.220 Persistent Event Log Pages: Not Supported 00:24:01.220 Supported Log Pages Log Page: May Support 00:24:01.220 Commands Supported & Effects Log Page: Not Supported 00:24:01.220 Feature Identifiers & 
Effects Log Page:May Support 00:24:01.220 NVMe-MI Commands & Effects Log Page: May Support 00:24:01.220 Data Area 4 for Telemetry Log: Not Supported 00:24:01.220 Error Log Page Entries Supported: 128 00:24:01.220 Keep Alive: Supported 00:24:01.220 Keep Alive Granularity: 10000 ms 00:24:01.220 00:24:01.220 NVM Command Set Attributes 00:24:01.220 ========================== 00:24:01.220 Submission Queue Entry Size 00:24:01.220 Max: 64 00:24:01.220 Min: 64 00:24:01.220 Completion Queue Entry Size 00:24:01.220 Max: 16 00:24:01.220 Min: 16 00:24:01.220 Number of Namespaces: 32 00:24:01.220 Compare Command: Supported 00:24:01.220 Write Uncorrectable Command: Not Supported 00:24:01.220 Dataset Management Command: Supported 00:24:01.220 Write Zeroes Command: Supported 00:24:01.220 Set Features Save Field: Not Supported 00:24:01.220 Reservations: Supported 00:24:01.220 Timestamp: Not Supported 00:24:01.220 Copy: Supported 00:24:01.220 Volatile Write Cache: Present 00:24:01.220 Atomic Write Unit (Normal): 1 00:24:01.220 Atomic Write Unit (PFail): 1 00:24:01.220 Atomic Compare & Write Unit: 1 00:24:01.220 Fused Compare & Write: Supported 00:24:01.220 Scatter-Gather List 00:24:01.220 SGL Command Set: Supported 00:24:01.220 SGL Keyed: Supported 00:24:01.220 SGL Bit Bucket Descriptor: Not Supported 00:24:01.220 SGL Metadata Pointer: Not Supported 00:24:01.220 Oversized SGL: Not Supported 00:24:01.220 SGL Metadata Address: Not Supported 00:24:01.220 SGL Offset: Supported 00:24:01.220 Transport SGL Data Block: Not Supported 00:24:01.220 Replay Protected Memory Block: Not Supported 00:24:01.220 00:24:01.220 Firmware Slot Information 00:24:01.220 ========================= 00:24:01.220 Active slot: 1 00:24:01.220 Slot 1 Firmware Revision: 24.09 00:24:01.220 00:24:01.220 00:24:01.220 Commands Supported and Effects 00:24:01.220 ============================== 00:24:01.220 Admin Commands 00:24:01.220 -------------- 00:24:01.220 Get Log Page (02h): Supported 00:24:01.220 Identify (06h): Supported 00:24:01.220 Abort (08h): Supported 00:24:01.220 Set Features (09h): Supported 00:24:01.220 Get Features (0Ah): Supported 00:24:01.220 Asynchronous Event Request (0Ch): Supported 00:24:01.220 Keep Alive (18h): Supported 00:24:01.220 I/O Commands 00:24:01.220 ------------ 00:24:01.220 Flush (00h): Supported LBA-Change 00:24:01.220 Write (01h): Supported LBA-Change 00:24:01.220 Read (02h): Supported 00:24:01.220 Compare (05h): Supported 00:24:01.220 Write Zeroes (08h): Supported LBA-Change 00:24:01.220 Dataset Management (09h): Supported LBA-Change 00:24:01.220 Copy (19h): Supported LBA-Change 00:24:01.220 00:24:01.220 Error Log 00:24:01.220 ========= 00:24:01.220 00:24:01.220 Arbitration 00:24:01.220 =========== 00:24:01.220 Arbitration Burst: 1 00:24:01.220 00:24:01.220 Power Management 00:24:01.220 ================ 00:24:01.220 Number of Power States: 1 00:24:01.221 Current Power State: Power State #0 00:24:01.221 Power State #0: 00:24:01.221 Max Power: 0.00 W 00:24:01.221 Non-Operational State: Operational 00:24:01.221 Entry Latency: Not Reported 00:24:01.221 Exit Latency: Not Reported 00:24:01.221 Relative Read Throughput: 0 00:24:01.221 Relative Read Latency: 0 00:24:01.221 Relative Write Throughput: 0 00:24:01.221 Relative Write Latency: 0 00:24:01.221 Idle Power: Not Reported 00:24:01.221 Active Power: Not Reported 00:24:01.221 Non-Operational Permissive Mode: Not Supported 00:24:01.221 00:24:01.221 Health Information 00:24:01.221 ================== 00:24:01.221 Critical Warnings: 00:24:01.221 Available Spare Space: 
OK 00:24:01.221 Temperature: OK 00:24:01.221 Device Reliability: OK 00:24:01.221 Read Only: No 00:24:01.221 Volatile Memory Backup: OK 00:24:01.221 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:01.221 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:01.221 Available Spare: 0% 00:24:01.221 Available Spare Threshold: 0% 00:24:01.221 Life Percentage Used:[2024-07-12 11:02:18.152304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.152310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xad4ec0) 00:24:01.221 [2024-07-12 11:02:18.152317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.221 [2024-07-12 11:02:18.152330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb588c0, cid 7, qid 0 00:24:01.221 [2024-07-12 11:02:18.152534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.221 [2024-07-12 11:02:18.152541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.221 [2024-07-12 11:02:18.152544] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.152550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb588c0) on tqpair=0xad4ec0 00:24:01.221 [2024-07-12 11:02:18.152587] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:01.221 [2024-07-12 11:02:18.152597] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57e40) on tqpair=0xad4ec0 00:24:01.221 [2024-07-12 11:02:18.152603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.221 [2024-07-12 11:02:18.152609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb57fc0) on tqpair=0xad4ec0 00:24:01.221 [2024-07-12 11:02:18.152613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.221 [2024-07-12 11:02:18.152618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb58140) on tqpair=0xad4ec0 00:24:01.221 [2024-07-12 11:02:18.152623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.221 [2024-07-12 11:02:18.152627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb582c0) on tqpair=0xad4ec0 00:24:01.221 [2024-07-12 11:02:18.152632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.221 [2024-07-12 11:02:18.152640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.152644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.152647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4ec0) 00:24:01.221 [2024-07-12 11:02:18.152654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.221 [2024-07-12 11:02:18.152667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb582c0, cid 3, qid 0 00:24:01.221 [2024-07-12 11:02:18.152879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.221 [2024-07-12 11:02:18.152885] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.221 [2024-07-12 11:02:18.152888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.152892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb582c0) on tqpair=0xad4ec0 00:24:01.221 [2024-07-12 11:02:18.152899] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.152902] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.152906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4ec0) 00:24:01.221 [2024-07-12 11:02:18.152912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.221 [2024-07-12 11:02:18.152925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb582c0, cid 3, qid 0 00:24:01.221 [2024-07-12 11:02:18.157131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.221 [2024-07-12 11:02:18.157140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.221 [2024-07-12 11:02:18.157144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.157148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb582c0) on tqpair=0xad4ec0 00:24:01.221 [2024-07-12 11:02:18.157152] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:01.221 [2024-07-12 11:02:18.157157] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:01.221 [2024-07-12 11:02:18.157168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.157171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.157175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4ec0) 00:24:01.221 [2024-07-12 11:02:18.157182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.221 [2024-07-12 11:02:18.157198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb582c0, cid 3, qid 0 00:24:01.221 [2024-07-12 11:02:18.157402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.221 [2024-07-12 11:02:18.157408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.221 [2024-07-12 11:02:18.157411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.221 [2024-07-12 11:02:18.157415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb582c0) on tqpair=0xad4ec0 00:24:01.221 [2024-07-12 11:02:18.157423] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:24:01.221 0% 00:24:01.221 Data Units Read: 0 00:24:01.221 Data Units Written: 0 00:24:01.221 Host Read Commands: 0 00:24:01.221 Host Write Commands: 0 00:24:01.221 Controller Busy Time: 0 minutes 00:24:01.221 Power Cycles: 0 00:24:01.221 Power On Hours: 0 hours 00:24:01.221 Unsafe Shutdowns: 0 00:24:01.221 Unrecoverable Media Errors: 0 00:24:01.221 Lifetime Error Log Entries: 0 00:24:01.221 Warning Temperature Time: 0 minutes 00:24:01.221 Critical Temperature Time: 0 minutes 00:24:01.221 00:24:01.221 Number of Queues 00:24:01.221 
================ 00:24:01.221 Number of I/O Submission Queues: 127 00:24:01.221 Number of I/O Completion Queues: 127 00:24:01.221 00:24:01.221 Active Namespaces 00:24:01.221 ================= 00:24:01.221 Namespace ID:1 00:24:01.221 Error Recovery Timeout: Unlimited 00:24:01.221 Command Set Identifier: NVM (00h) 00:24:01.221 Deallocate: Supported 00:24:01.221 Deallocated/Unwritten Error: Not Supported 00:24:01.221 Deallocated Read Value: Unknown 00:24:01.221 Deallocate in Write Zeroes: Not Supported 00:24:01.221 Deallocated Guard Field: 0xFFFF 00:24:01.221 Flush: Supported 00:24:01.221 Reservation: Supported 00:24:01.221 Namespace Sharing Capabilities: Multiple Controllers 00:24:01.221 Size (in LBAs): 131072 (0GiB) 00:24:01.221 Capacity (in LBAs): 131072 (0GiB) 00:24:01.221 Utilization (in LBAs): 131072 (0GiB) 00:24:01.221 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:01.221 EUI64: ABCDEF0123456789 00:24:01.221 UUID: 7fb02325-6234-4a1d-a102-3ef9757d7d38 00:24:01.221 Thin Provisioning: Not Supported 00:24:01.221 Per-NS Atomic Units: Yes 00:24:01.221 Atomic Boundary Size (Normal): 0 00:24:01.221 Atomic Boundary Size (PFail): 0 00:24:01.221 Atomic Boundary Offset: 0 00:24:01.221 Maximum Single Source Range Length: 65535 00:24:01.221 Maximum Copy Length: 65535 00:24:01.221 Maximum Source Range Count: 1 00:24:01.221 NGUID/EUI64 Never Reused: No 00:24:01.221 Namespace Write Protected: No 00:24:01.221 Number of LBA Formats: 1 00:24:01.221 Current LBA Format: LBA Format #00 00:24:01.221 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:01.221 00:24:01.221 11:02:18 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:01.221 11:02:18 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.221 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.221 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.221 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.221 11:02:18 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:01.221 11:02:18 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:01.221 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.221 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.482 rmmod nvme_tcp 00:24:01.482 rmmod nvme_fabrics 00:24:01.482 rmmod nvme_keyring 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2191059 ']' 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2191059 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2191059 ']' 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2191059 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@953 -- # uname 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2191059 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2191059' 00:24:01.482 killing process with pid 2191059 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2191059 00:24:01.482 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2191059 00:24:01.744 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.744 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.744 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.744 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.744 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.744 11:02:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.744 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.744 11:02:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.654 11:02:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:03.654 00:24:03.654 real 0m11.357s 00:24:03.654 user 0m8.168s 00:24:03.654 sys 0m5.984s 00:24:03.654 11:02:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:03.654 11:02:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.654 ************************************ 00:24:03.654 END TEST nvmf_identify 00:24:03.654 ************************************ 00:24:03.914 11:02:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:03.914 11:02:20 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:03.914 11:02:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:03.914 11:02:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:03.914 11:02:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:03.914 ************************************ 00:24:03.914 START TEST nvmf_perf 00:24:03.914 ************************************ 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:03.915 * Looking for test storage... 
00:24:03.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.915 11:02:20 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.915 11:02:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:12.077 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:12.077 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:12.077 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:12.077 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:12.077 11:02:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:12.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:24:12.077 00:24:12.077 --- 10.0.0.2 ping statistics --- 00:24:12.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.077 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:12.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:24:12.077 00:24:12.077 --- 10.0.0.1 ping statistics --- 00:24:12.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.077 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:12.077 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2195632 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2195632 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2195632 ']' 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.078 11:02:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:12.078 [2024-07-12 11:02:28.210243] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:12.078 [2024-07-12 11:02:28.210305] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.078 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.078 [2024-07-12 11:02:28.296602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.078 [2024-07-12 11:02:28.393250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.078 [2024-07-12 11:02:28.393307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:12.078 [2024-07-12 11:02:28.393315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.078 [2024-07-12 11:02:28.393322] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.078 [2024-07-12 11:02:28.393328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.078 [2024-07-12 11:02:28.393492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.078 [2024-07-12 11:02:28.393629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.078 [2024-07-12 11:02:28.393790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.078 [2024-07-12 11:02:28.393792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.078 11:02:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.078 11:02:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:12.078 11:02:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.078 11:02:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.078 11:02:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:12.078 11:02:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.078 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:12.078 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:12.648 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:12.648 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:12.908 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:12.908 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:13.168 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:13.168 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:13.168 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:13.168 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:13.168 11:02:29 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:13.168 [2024-07-12 11:02:30.101910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.168 11:02:30 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.427 11:02:30 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:13.427 11:02:30 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.686 11:02:30 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:13.686 11:02:30 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:13.946 11:02:30 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.946 [2024-07-12 11:02:30.824927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.946 11:02:30 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:14.206 11:02:31 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:14.206 11:02:31 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:14.206 11:02:31 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:14.206 11:02:31 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:15.617 Initializing NVMe Controllers 00:24:15.617 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:15.617 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:15.617 Initialization complete. Launching workers. 00:24:15.617 ======================================================== 00:24:15.617 Latency(us) 00:24:15.617 Device Information : IOPS MiB/s Average min max 00:24:15.617 PCIE (0000:65:00.0) NSID 1 from core 0: 79781.02 311.64 400.65 13.15 4885.85 00:24:15.617 ======================================================== 00:24:15.617 Total : 79781.02 311.64 400.65 13.15 4885.85 00:24:15.617 00:24:15.617 11:02:32 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:15.617 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.998 Initializing NVMe Controllers 00:24:16.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:16.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:16.998 Initialization complete. Launching workers. 
00:24:16.998 ======================================================== 00:24:16.998 Latency(us) 00:24:16.998 Device Information : IOPS MiB/s Average min max 00:24:16.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 86.00 0.34 11892.28 232.71 46116.94 00:24:16.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18651.87 7954.85 47903.33 00:24:16.998 ======================================================== 00:24:16.998 Total : 142.00 0.55 14558.03 232.71 47903.33 00:24:16.998 00:24:16.998 11:02:33 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:16.998 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.379 Initializing NVMe Controllers 00:24:18.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:18.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:18.379 Initialization complete. Launching workers. 00:24:18.379 ======================================================== 00:24:18.379 Latency(us) 00:24:18.379 Device Information : IOPS MiB/s Average min max 00:24:18.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13036.00 50.92 2456.60 368.23 6583.01 00:24:18.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3781.00 14.77 8501.75 6567.04 16080.55 00:24:18.379 ======================================================== 00:24:18.379 Total : 16817.00 65.69 3815.74 368.23 16080.55 00:24:18.379 00:24:18.379 11:02:35 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:18.379 11:02:35 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:18.380 11:02:35 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:18.380 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.919 Initializing NVMe Controllers 00:24:20.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.919 Controller IO queue size 128, less than required. 00:24:20.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.919 Controller IO queue size 128, less than required. 00:24:20.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:20.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:20.919 Initialization complete. Launching workers. 
00:24:20.919 ======================================================== 00:24:20.919 Latency(us) 00:24:20.919 Device Information : IOPS MiB/s Average min max 00:24:20.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1038.33 259.58 126747.40 64326.46 192928.80 00:24:20.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.41 148.60 223250.04 64905.09 348980.32 00:24:20.919 ======================================================== 00:24:20.919 Total : 1632.74 408.18 161879.56 64326.46 348980.32 00:24:20.919 00:24:20.919 11:02:37 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:20.919 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.919 No valid NVMe controllers or AIO or URING devices found 00:24:20.919 Initializing NVMe Controllers 00:24:20.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.919 Controller IO queue size 128, less than required. 00:24:20.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.919 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:20.919 Controller IO queue size 128, less than required. 00:24:20.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.919 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:20.919 WARNING: Some requested NVMe devices were skipped 00:24:20.919 11:02:37 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:21.178 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.719 Initializing NVMe Controllers 00:24:23.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.720 Controller IO queue size 128, less than required. 00:24:23.720 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.720 Controller IO queue size 128, less than required. 00:24:23.720 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:23.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:23.720 Initialization complete. Launching workers. 
00:24:23.720 00:24:23.720 ==================== 00:24:23.720 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:23.720 TCP transport: 00:24:23.720 polls: 57648 00:24:23.720 idle_polls: 17598 00:24:23.720 sock_completions: 40050 00:24:23.720 nvme_completions: 4513 00:24:23.720 submitted_requests: 6694 00:24:23.720 queued_requests: 1 00:24:23.720 00:24:23.720 ==================== 00:24:23.720 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:23.720 TCP transport: 00:24:23.720 polls: 57170 00:24:23.720 idle_polls: 19762 00:24:23.720 sock_completions: 37408 00:24:23.720 nvme_completions: 4397 00:24:23.720 submitted_requests: 6556 00:24:23.720 queued_requests: 1 00:24:23.720 ======================================================== 00:24:23.720 Latency(us) 00:24:23.720 Device Information : IOPS MiB/s Average min max 00:24:23.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1126.40 281.60 117804.15 65781.22 209881.32 00:24:23.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1097.44 274.36 119526.56 46726.97 189772.99 00:24:23.720 ======================================================== 00:24:23.720 Total : 2223.84 555.96 118654.14 46726.97 209881.32 00:24:23.720 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.720 rmmod nvme_tcp 00:24:23.720 rmmod nvme_fabrics 00:24:23.720 rmmod nvme_keyring 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2195632 ']' 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2195632 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2195632 ']' 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2195632 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2195632 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:23.720 11:02:40 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2195632' 00:24:23.720 killing process with pid 2195632 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2195632 00:24:23.720 11:02:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2195632 00:24:26.260 11:02:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.260 11:02:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.260 11:02:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.260 11:02:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.260 11:02:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.260 11:02:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.260 11:02:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.260 11:02:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.171 11:02:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.171 00:24:28.171 real 0m24.039s 00:24:28.171 user 0m58.768s 00:24:28.171 sys 0m7.930s 00:24:28.171 11:02:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.171 11:02:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.171 ************************************ 00:24:28.171 END TEST nvmf_perf 00:24:28.171 ************************************ 00:24:28.171 11:02:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:28.171 11:02:44 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:28.171 11:02:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:28.171 11:02:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.171 11:02:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.171 ************************************ 00:24:28.171 START TEST nvmf_fio_host 00:24:28.171 ************************************ 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:28.171 * Looking for test storage... 
00:24:28.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.171 11:02:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.172 11:02:44 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.312 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:36.313 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
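The device scan being traced here keys NICs by PCI vendor/device ID and then reads the bound net interfaces out of sysfs. A minimal standalone sketch of the same idea (a hypothetical bash helper, not the actual nvmf/common.sh code; it matches only the two Intel E810 IDs seen in this run):

  #!/usr/bin/env bash
  # Walk every PCI function and match the Intel E810 device IDs from the trace.
  intel=0x8086
  e810=(0x1592 0x159b)
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == "$intel" ]] || continue
      dev_id=$(<"$pci/device")
      for id in "${e810[@]}"; do
          [[ $dev_id == "$id" ]] || continue
          echo "Found ${pci##*/} ($intel - $dev_id)"
          # A function bound to the ice driver exposes its interface
          # names under net/ in its sysfs directory.
          for net in "$pci"/net/*; do
              [[ -e $net ]] && echo "  net device: ${net##*/}"
          done
      done
  done

On this rig such a loop would print the two 0000:4b:00.x functions and their cvl_0_0/cvl_0_1 interfaces, matching the 'Found ...' lines in the log below.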
00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:36.313 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:36.313 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:36.313 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
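With is_hw=yes established, the nvmf_tcp_init sequence traced next splits the two ports into a target/initiator pair. Condensed to its iproute2 essentials, using the exact interface names and addresses from this run (the address flushes are omitted; the ports are presumably cabled back-to-back on this rig, and root is required):

  # Target side: the first port moves into its own namespace with 10.0.0.2.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Initiator side: the second port stays in the root namespace with 10.0.0.1.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # Open the NVMe/TCP port and sanity-check both directions, as the log does.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ...), so it listens on 10.0.0.2:4420 while fio connects from the root namespace over the physical link.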
00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.313 11:02:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:36.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:24:36.313 00:24:36.313 --- 10.0.0.2 ping statistics --- 00:24:36.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.313 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:36.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:24:36.313 00:24:36.313 --- 10.0.0.1 ping statistics --- 00:24:36.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.313 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2202403 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2202403 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2202403 ']' 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.313 11:02:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.313 [2024-07-12 11:02:52.381941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:36.313 [2024-07-12 11:02:52.382003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.313 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.313 [2024-07-12 11:02:52.468039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:36.313 [2024-07-12 11:02:52.565394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:36.313 [2024-07-12 11:02:52.565454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.313 [2024-07-12 11:02:52.565462] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.313 [2024-07-12 11:02:52.565469] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.313 [2024-07-12 11:02:52.565475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.313 [2024-07-12 11:02:52.565640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.313 [2024-07-12 11:02:52.565799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.313 [2024-07-12 11:02:52.565962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.313 [2024-07-12 11:02:52.565962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:36.313 11:02:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.313 11:02:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:36.313 11:02:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:36.573 [2024-07-12 11:02:53.330703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.573 11:02:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:36.573 11:02:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.573 11:02:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.573 11:02:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:36.833 Malloc1 00:24:36.833 11:02:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:36.833 11:02:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:37.094 11:02:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.354 [2024-07-12 11:02:54.123441] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.354 11:02:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:37.354 11:02:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:37.354 11:02:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:37.660 11:02:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:37.920 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:37.920 fio-3.35 00:24:37.920 Starting 1 thread 00:24:37.920 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.464 00:24:40.464 test: (groupid=0, jobs=1): err= 0: pid=2203219: Fri Jul 12 11:02:56 2024 00:24:40.464 read: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(108MiB/2005msec) 00:24:40.464 slat (usec): min=2, max=288, avg= 2.18, stdev= 2.43 00:24:40.464 clat (usec): min=3416, max=8859, avg=5107.13, stdev=517.54 00:24:40.464 lat (usec): min=3419, max=8872, avg=5109.31, stdev=517.75 00:24:40.464 clat percentiles (usec): 00:24:40.464 | 1.00th=[ 4228], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4752], 00:24:40.464 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5145], 00:24:40.464 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5800], 00:24:40.464 | 99.00th=[ 7439], 99.50th=[ 7832], 99.90th=[ 8586], 99.95th=[ 8717], 00:24:40.464 | 99.99th=[ 8848] 00:24:40.464 bw ( KiB/s): min=52648, 
max=56496, per=100.00%, avg=55426.00, stdev=1860.87, samples=4 00:24:40.464 iops : min=13162, max=14124, avg=13856.50, stdev=465.22, samples=4 00:24:40.464 write: IOPS=13.9k, BW=54.1MiB/s (56.8MB/s)(109MiB/2005msec); 0 zone resets 00:24:40.464 slat (usec): min=2, max=272, avg= 2.30, stdev= 1.80 00:24:40.464 clat (usec): min=2799, max=8321, avg=4101.95, stdev=436.39 00:24:40.464 lat (usec): min=2801, max=8323, avg=4104.25, stdev=436.63 00:24:40.464 clat percentiles (usec): 00:24:40.464 | 1.00th=[ 3294], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3818], 00:24:40.464 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4146], 00:24:40.464 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4621], 00:24:40.464 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7177], 99.95th=[ 7898], 00:24:40.464 | 99.99th=[ 8291] 00:24:40.464 bw ( KiB/s): min=52992, max=56440, per=99.98%, avg=55430.00, stdev=1631.33, samples=4 00:24:40.464 iops : min=13248, max=14110, avg=13857.50, stdev=407.83, samples=4 00:24:40.464 lat (msec) : 4=20.57%, 10=79.43% 00:24:40.464 cpu : usr=67.76%, sys=27.20%, ctx=26, majf=0, minf=7 00:24:40.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:40.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:40.464 issued rwts: total=27766,27789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:40.464 00:24:40.464 Run status group 0 (all jobs): 00:24:40.464 READ: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=108MiB (114MB), run=2005-2005msec 00:24:40.465 WRITE: bw=54.1MiB/s (56.8MB/s), 54.1MiB/s-54.1MiB/s (56.8MB/s-56.8MB/s), io=109MiB (114MB), run=2005-2005msec 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:40.465 11:02:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:40.465 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:40.465 fio-3.35 00:24:40.465 Starting 1 thread 00:24:40.465 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.011 00:24:43.011 test: (groupid=0, jobs=1): err= 0: pid=2203731: Fri Jul 12 11:02:59 2024 00:24:43.011 read: IOPS=9226, BW=144MiB/s (151MB/s)(289MiB/2007msec) 00:24:43.011 slat (usec): min=3, max=114, avg= 3.67, stdev= 1.73 00:24:43.011 clat (usec): min=1203, max=16830, avg=8602.94, stdev=1975.41 00:24:43.011 lat (usec): min=1207, max=16847, avg=8606.61, stdev=1975.67 00:24:43.011 clat percentiles (usec): 00:24:43.011 | 1.00th=[ 4359], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 6849], 00:24:43.011 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9110], 00:24:43.011 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11207], 95.00th=[11863], 00:24:43.011 | 99.00th=[13435], 99.50th=[13829], 99.90th=[16057], 99.95th=[16319], 00:24:43.011 | 99.99th=[16450] 00:24:43.011 bw ( KiB/s): min=69504, max=79776, per=50.20%, avg=74112.00, stdev=4560.42, samples=4 00:24:43.011 iops : min= 4344, max= 4986, avg=4632.00, stdev=285.03, samples=4 00:24:43.011 write: IOPS=5183, BW=81.0MiB/s (84.9MB/s)(150MiB/1855msec); 0 zone resets 00:24:43.011 slat (usec): min=40, max=458, avg=41.40, stdev= 9.58 00:24:43.011 clat (usec): min=2062, max=17850, avg=9266.66, stdev=1486.30 00:24:43.011 lat (usec): min=2102, max=17987, avg=9308.07, stdev=1490.18 00:24:43.011 clat percentiles (usec): 00:24:43.011 | 1.00th=[ 5866], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 8094], 00:24:43.011 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9503], 00:24:43.011 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11469], 00:24:43.011 | 99.00th=[12911], 99.50th=[16450], 99.90th=[17433], 99.95th=[17695], 00:24:43.011 | 99.99th=[17957] 00:24:43.011 bw ( KiB/s): min=71840, max=82912, per=92.31%, avg=76552.00, stdev=4963.29, samples=4 00:24:43.011 iops : min= 4490, max= 5182, avg=4784.50, stdev=310.21, samples=4 00:24:43.011 lat (msec) : 2=0.02%, 4=0.50%, 10=74.77%, 20=24.71% 00:24:43.011 cpu : usr=83.85%, sys=13.51%, ctx=13, majf=0, minf=4 00:24:43.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:43.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:43.011 issued rwts: total=18518,9615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:43.011 00:24:43.011 Run status group 0 (all jobs): 00:24:43.011 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=289MiB (303MB), run=2007-2007msec 00:24:43.011 WRITE: bw=81.0MiB/s (84.9MB/s), 81.0MiB/s-81.0MiB/s (84.9MB/s-84.9MB/s), io=150MiB (158MB), run=1855-1855msec 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:43.011 rmmod nvme_tcp 00:24:43.011 rmmod nvme_fabrics 00:24:43.011 rmmod nvme_keyring 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2202403 ']' 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2202403 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2202403 ']' 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2202403 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2202403 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2202403' 00:24:43.011 killing process with pid 2202403 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2202403 00:24:43.011 11:02:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2202403 00:24:43.273 11:03:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:43.273 11:03:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:43.273 11:03:00 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:43.273 11:03:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.273 11:03:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.273 11:03:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.273 11:03:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.273 11:03:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.187 11:03:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:45.187 00:24:45.187 real 0m17.292s 00:24:45.187 user 0m58.359s 00:24:45.187 sys 0m7.491s 00:24:45.187 11:03:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:45.187 11:03:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.187 ************************************ 00:24:45.187 END TEST nvmf_fio_host 00:24:45.187 ************************************ 00:24:45.187 11:03:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:45.187 11:03:02 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:45.187 11:03:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:45.187 11:03:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.187 11:03:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:45.447 ************************************ 00:24:45.447 START TEST nvmf_failover 00:24:45.447 ************************************ 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:45.447 * Looking for test storage... 
00:24:45.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:45.447 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:45.448 11:03:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:52.052 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:52.052 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.052 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:52.314 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:52.314 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.314 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:52.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:24:52.582 00:24:52.582 --- 10.0.0.2 ping statistics --- 00:24:52.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.582 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:24:52.582 00:24:52.582 --- 10.0.0.1 ping statistics --- 00:24:52.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.582 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2208637 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2208637 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2208637 ']' 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.582 11:03:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.582 [2024-07-12 11:03:09.484756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:52.582 [2024-07-12 11:03:09.484829] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:52.582 EAL: No free 2048 kB hugepages reported on node 1
00:24:52.917 [2024-07-12 11:03:09.572689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:52.917 [2024-07-12 11:03:09.633245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:52.917 [2024-07-12 11:03:09.633278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:52.917 [2024-07-12 11:03:09.633284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:52.917 [2024-07-12 11:03:09.633290] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:52.917 [2024-07-12 11:03:09.633295] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:52.917 [2024-07-12 11:03:09.633482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:52.917 [2024-07-12 11:03:09.633620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:52.917 [2024-07-12 11:03:09.633623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:53.503 11:03:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:53.503 11:03:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:24:53.503 11:03:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:53.503 11:03:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:53.503 11:03:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:53.503 11:03:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:53.503 11:03:10 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:53.503 [2024-07-12 11:03:10.437476] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:53.503 11:03:10 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:53.764 Malloc0
00:24:53.764 11:03:10 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:54.025 11:03:10 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:54.025 11:03:10 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:54.286 [2024-07-12 11:03:11.139907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
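Those RPC calls are the whole target-side configuration for this test. Collected in one place, with paths and names exactly as they appear in the trace (the loop framing is editorial; the script issues the calls one by one, and the 4421/4422 listeners are added just below):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192    # TCP transport, -u 8192 = 8 KiB in-capsule data
  $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                  # three listeners, i.e. three candidate paths
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done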
00:24:54.286 11:03:11 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:54.547 [2024-07-12 11:03:11.300322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:54.547 [2024-07-12 11:03:11.468799] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2209303
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2209303 /var/tmp/bdevperf.sock
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2209303 ']'
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:54.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:54.547 11:03:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:55.489 11:03:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:55.489 11:03:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:24:55.489 11:03:12 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:55.750 NVMe0n1
00:24:55.750 11:03:12 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:56.010 
00:24:56.010 11:03:12 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2209647
00:24:56.010 11:03:12 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:24:56.010 11:03:12 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:56.951 11:03:13 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:57.214 [2024-07-12 11:03:14.020916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcec50 is same with the state(5) to be set
[... identical tcp.c:1607 "recv state of tqpair=0xbcec50" records from 11:03:14.020957 through 11:03:14.021090 omitted while the 4420 connection is torn down ...]
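At this point bdevperf holds two sessions to cnode1 under the single controller name NVMe0 (ports 4420 and 4421), and the harness has just pulled the 4420 listener out from under the active path; the burst of tcp.c:1607 records above is the target tearing that connection down while I/O fails over to 4421. A hypothetical way to watch the initiator side during such an event, using the stock bdev_nvme_get_controllers RPC (the polling loop itself is not part of the harness):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Poll bdevperf's view of controller NVMe0 once a second across the failover window.
  for i in $(seq 1 5); do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0
    sleep 1
  done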
00:24:57.214 11:03:14 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:00.516 11:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:00.516 
00:25:00.517 11:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:00.517 [2024-07-12 11:03:17.486575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd0370 is same with the state(5) to be set
[... identical tcp.c:1607 "recv state of tqpair=0xbd0370" records from 11:03:17.486609 through 11:03:17.486706 omitted while the 4421 connection is torn down ...]
00:25:00.777 11:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:04.077 11:03:20 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:04.077 [2024-07-12 11:03:20.664001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:04.077 11:03:20 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:05.019 11:03:21 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:05.019 [2024-07-12 11:03:21.850865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd0a70 is same with the state(5) to be set
[... identical tcp.c:1607 "recv state of tqpair=0xbd0a70" records from 11:03:21.850906 through 11:03:21.851246 omitted while the 4422 connection is torn down ...]
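That remove_listener call was the third and last path change. Condensed, the choreography the test has driven since perform_tests started (failover.sh lines 43-57 in the trace; commands and ports copied from it) is:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # drop the active path
  sleep 3                                                              # I/O fails over to 4421
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # drop the second path
  sleep 3                                                              # I/O fails over to 4422
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # restore the first path
  sleep 1
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # force failback to 4420

bdevperf's verify workload keeps running across all three changes; as the trace shows just below, the test run then completes and exits 0.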
00:25:05.020 11:03:21 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2209647
00:25:11.605 0
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2209303
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2209303 ']'
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2209303
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2209303
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2209303'
00:25:11.605 killing process with pid 2209303
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2209303
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2209303
00:25:11.605 11:03:28 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:11.605 [2024-07-12 11:03:11.545545] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:25:11.605 [2024-07-12 11:03:11.545602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2209303 ]
00:25:11.605 EAL: No free 2048 kB hugepages reported on node 1
00:25:11.605 [2024-07-12 11:03:11.621390] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:11.605 [2024-07-12 11:03:11.685209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:11.605 Running I/O for 15 seconds...
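The try.txt dump being replayed here is bdevperf's own log. bdevperf was started idle with -z and driven over its RPC socket: the controllers were attached, then the 15-second verify run was kicked off with the bdevperf.py perform_tests helper. Reconstructed from the commands in this trace (flags copied verbatim: -q 128 queue depth, -o 4096-byte I/Os, -w verify workload, -t 15 seconds), the standalone flow is roughly:

  BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $BDEVPERF -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &   # -z: idle until told to run
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests   # blocks until the run finishes, returns its result

The ABORTED - SQ DELETION records that follow are expected: they are the initiator-side completions for I/O that was in flight or queued on the 4420 queue pair at the moment that listener was removed.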
00:25:11.605 [2024-07-12 11:03:14.023043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:11.605 [2024-07-12 11:03:14.023079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for each remaining in-flight READ and WRITE on the dying I/O queue (qid:1), lba:95536 through lba:96184, every one completed with ABORTED - SQ DELETION (00/08); identical records omitted ...]
00:25:11.607 [2024-07-12 11:03:14.024363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:11.607 [2024-07-12 11:03:14.024370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96192 len:8 PRP1 0x0 PRP2 0x0
00:25:11.607 [2024-07-12 11:03:14.024377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort_queued_reqs/manual-complete triple repeats for the still-queued WRITEs, lba:96200 through lba:96296, with the same status; identical records omitted ...]
00:25:11.607 [2024-07-12 11:03:14.024731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:11.607 [2024-07-12 11:03:14.024739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96304 len:8 PRP1 0x0 PRP2 0x0
00:25:11.607 [2024-07-12 11:03:14.024746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.607 [2024-07-12 11:03:14.024753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:11.607 [2024-07-12 11:03:14.024758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request:
*NOTICE*: Command completed manually: 00:25:11.607 [2024-07-12 11:03:14.024764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96312 len:8 PRP1 0x0 PRP2 0x0 00:25:11.607 [2024-07-12 11:03:14.024771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.607 [2024-07-12 11:03:14.024779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.607 [2024-07-12 11:03:14.024784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.607 [2024-07-12 11:03:14.024790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96320 len:8 PRP1 0x0 PRP2 0x0 00:25:11.607 [2024-07-12 11:03:14.024797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.607 [2024-07-12 11:03:14.024805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.607 [2024-07-12 11:03:14.024810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.607 [2024-07-12 11:03:14.024816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96328 len:8 PRP1 0x0 PRP2 0x0 00:25:11.607 [2024-07-12 11:03:14.024823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.607 [2024-07-12 11:03:14.024831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.607 [2024-07-12 11:03:14.024836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.607 [2024-07-12 11:03:14.024842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96336 len:8 PRP1 0x0 PRP2 0x0 00:25:11.607 [2024-07-12 11:03:14.024849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.607 [2024-07-12 11:03:14.024856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.607 [2024-07-12 11:03:14.024862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.607 [2024-07-12 11:03:14.024868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96344 len:8 PRP1 0x0 PRP2 0x0 00:25:11.607 [2024-07-12 11:03:14.024875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.607 [2024-07-12 11:03:14.024882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.607 [2024-07-12 11:03:14.024888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.607 [2024-07-12 11:03:14.024894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96352 len:8 PRP1 0x0 PRP2 0x0 00:25:11.607 [2024-07-12 11:03:14.024900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.607 [2024-07-12 11:03:14.024908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.607 [2024-07-12 11:03:14.024913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.607 
[2024-07-12 11:03:14.024919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96360 len:8 PRP1 0x0 PRP2 0x0 00:25:11.607 [2024-07-12 11:03:14.024926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.607 [2024-07-12 11:03:14.024935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.607 [2024-07-12 11:03:14.024941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.607 [2024-07-12 11:03:14.024947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96368 len:8 PRP1 0x0 PRP2 0x0 00:25:11.607 [2024-07-12 11:03:14.024954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.607 [2024-07-12 11:03:14.024961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.607 [2024-07-12 11:03:14.024966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.607 [2024-07-12 11:03:14.024972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96376 len:8 PRP1 0x0 PRP2 0x0 00:25:11.607 [2024-07-12 11:03:14.024979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.607 [2024-07-12 11:03:14.024987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.607 [2024-07-12 11:03:14.024992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.607 [2024-07-12 11:03:14.024998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96384 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96392 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96400 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96408 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96416 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96424 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96432 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96440 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96448 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:96456 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96464 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96472 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96480 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96488 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96496 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96504 len:8 PRP1 0x0 PRP2 0x0 
00:25:11.608 [2024-07-12 11:03:14.025400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96512 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.025448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96520 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.025455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.025462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.025467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.034984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96528 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.035012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.035028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.035034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.035040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96536 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.035048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.035056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.035062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.035068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96544 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.035075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.035082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.035092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.035098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95712 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.035105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.035113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.035119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.035133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95720 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.035141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.035148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.035154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.035160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95728 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.035167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.035174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.035180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.608 [2024-07-12 11:03:14.035185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95736 len:8 PRP1 0x0 PRP2 0x0 00:25:11.608 [2024-07-12 11:03:14.035192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.608 [2024-07-12 11:03:14.035200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.608 [2024-07-12 11:03:14.035205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.609 [2024-07-12 11:03:14.035211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95744 len:8 PRP1 0x0 PRP2 0x0 00:25:11.609 [2024-07-12 11:03:14.035219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.609 [2024-07-12 11:03:14.035226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.609 [2024-07-12 11:03:14.035232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.609 [2024-07-12 11:03:14.035238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95752 len:8 PRP1 0x0 PRP2 0x0 00:25:11.609 [2024-07-12 11:03:14.035245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.609 [2024-07-12 11:03:14.035252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.609 [2024-07-12 11:03:14.035257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.609 [2024-07-12 11:03:14.035263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95760 len:8 PRP1 0x0 PRP2 0x0 00:25:11.609 [2024-07-12 11:03:14.035270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
00:25:11.609 [2024-07-12 11:03:14.035309] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7c300 was disconnected and freed. reset controller.
00:25:11.609 [2024-07-12 11:03:14.035319] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:11.609 [2024-07-12 11:03:14.035346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:11.609 [2024-07-12 11:03:14.035357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 further ASYNC EVENT REQUEST command/completion pairs elided (qid:0 cid:1-3, same ABORTED - SQ DELETION (00/08) status) ...]
00:25:11.609 [2024-07-12 11:03:14.035412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:11.609 [2024-07-12 11:03:14.035443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5aef0 (9): Bad file descriptor
00:25:11.609 [2024-07-12 11:03:14.038957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:11.609 [2024-07-12 11:03:14.251473] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
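[editor's note, not part of the captured log: the block above is the bdev_nvme failover path in action. The TCP qpair to 10.0.0.2:4420 is disconnected, every in-flight and queued command on it is completed with the NVMe generic status "Command Aborted due to SQ Deletion" (sct 00h / sc 08h, the "(00/08)" in the completions), and the controller is then reset against the alternate path 10.0.0.2:4421. A minimal sketch of how two paths to the same subsystem are registered so such a failover can occur, using SPDK's rpc.py; the controller name nvme0 is an illustrative assumption (the addresses and NQN are taken from the log), and depending on the SPDK version an explicit multipath mode option may also be required:

  # Attach the primary path; I/O initially flows to 10.0.0.2:4420.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Attach a second transport ID under the same controller name; bdev_nvme
  # records 10.0.0.2:4421 as an alternate trid to fail over to when the
  # first path drops.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1

When the 4420 listener goes away, bdev_nvme aborts the queued i/o (the 579:nvme_qpair_abort_queued_reqs lines), picks the next trid (1870:bdev_nvme_failover_trid), and resets the controller, which is the sequence logged above.]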
00:25:11.609 [2024-07-12 11:03:17.487012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.609 [2024-07-12 11:03:17.487042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: in-flight WRITEs sqid:1 lba:79328-79584 len:8 (SGL DATA BLOCK OFFSET) and READs sqid:1 lba:78624-79064 len:8 (SGL TRANSPORT DATA BLOCK), each ABORTED - SQ DELETION (00/08) ...]
00:25:11.611 [2024-07-12 11:03:17.488076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:11.611 [2024-07-12 11:03:17.488081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.611 [2024-07-12 11:03:17.488087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488205] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.611 [2024-07-12 11:03:17.488291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.611 [2024-07-12 11:03:17.488297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.612 [2024-07-12 11:03:17.488427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.612 [2024-07-12 11:03:17.488440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.612 [2024-07-12 11:03:17.488451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.612 [2024-07-12 11:03:17.488463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.612 [2024-07-12 11:03:17.488474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.612 [2024-07-12 11:03:17.488486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.612 [2024-07-12 11:03:17.488497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.612 [2024-07-12 11:03:17.488518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.612 [2024-07-12 11:03:17.488523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79640 len:8 PRP1 0x0 PRP2 0x0 00:25:11.612 [2024-07-12 11:03:17.488529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.612 [2024-07-12 11:03:17.488561] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7e480 was disconnected and freed. reset controller. 
00:25:11.612 [2024-07-12 11:03:17.488569] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:11.612 [2024-07-12 11:03:17.488584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:11.612 [2024-07-12 11:03:17.488590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.612 [2024-07-12 11:03:17.488596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:11.612 [2024-07-12 11:03:17.488601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.612 [2024-07-12 11:03:17.488607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:11.612 [2024-07-12 11:03:17.488612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.612 [2024-07-12 11:03:17.488617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:11.612 [2024-07-12 11:03:17.488622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:11.612 [2024-07-12 11:03:17.488627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:11.612 [2024-07-12 11:03:17.491062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:11.612 [2024-07-12 11:03:17.491082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5aef0 (9): Bad file descriptor
00:25:11.612 [2024-07-12 11:03:17.526712] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[00:25:11.612-00:25:11.615, 2024-07-12 11:03:21.852505-11:03:21.853789: repetitive per-command NOTICE output; the queued READs (sqid:1, lba:14480-14632) and WRITEs (sqid:1, lba:14640-15368) each completed as ABORTED - SQ DELETION (00/08) qid:1 during the next qpair teardown]
[00:25:11.615, 2024-07-12 11:03:21.853805-11:03:21.854013: repeated nvme_qpair_abort_queued_reqs "aborting queued i/o" / nvme_qpair_manual_complete_request "Command completed manually" output; queued WRITEs (sqid:1, cid:0, lba:15376-15464, len:8, PRP1 0x0 PRP2 0x0) each completed as ABORTED - SQ DELETION (00/08) qid:1]
00:25:11.615 [2024-07-12
11:03:21.854018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.615 [2024-07-12 11:03:21.854021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.615 [2024-07-12 11:03:21.854025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15472 len:8 PRP1 0x0 PRP2 0x0 00:25:11.615 [2024-07-12 11:03:21.854030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.615 [2024-07-12 11:03:21.854035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.615 [2024-07-12 11:03:21.854039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.615 [2024-07-12 11:03:21.854043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15480 len:8 PRP1 0x0 PRP2 0x0 00:25:11.615 [2024-07-12 11:03:21.854047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.615 [2024-07-12 11:03:21.865061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.615 [2024-07-12 11:03:21.865086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.615 [2024-07-12 11:03:21.865097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:8 PRP1 0x0 PRP2 0x0 00:25:11.615 [2024-07-12 11:03:21.865107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.615 [2024-07-12 11:03:21.865116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.615 [2024-07-12 11:03:21.865121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.615 [2024-07-12 11:03:21.865134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15496 len:8 PRP1 0x0 PRP2 0x0 00:25:11.615 [2024-07-12 11:03:21.865141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.615 [2024-07-12 11:03:21.865185] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7e270 was disconnected and freed. reset controller. 
00:25:11.616 [2024-07-12 11:03:21.865195] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:11.616 [2024-07-12 11:03:21.865222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.616 [2024-07-12 11:03:21.865230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.616 [2024-07-12 11:03:21.865240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.616 [2024-07-12 11:03:21.865246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.616 [2024-07-12 11:03:21.865254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.616 [2024-07-12 11:03:21.865261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.616 [2024-07-12 11:03:21.865268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.616 [2024-07-12 11:03:21.865274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.616 [2024-07-12 11:03:21.865281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.616 [2024-07-12 11:03:21.865308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5aef0 (9): Bad file descriptor 00:25:11.616 [2024-07-12 11:03:21.868553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.616 [2024-07-12 11:03:22.027160] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
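The wall of ABORTED - SQ DELETION notices above is the expected signature of a forced path switch, not a data-path failure: when bdev_nvme tears down the submission queue on the active path (10.0.0.2:4422 here), every WRITE still queued on that qpair is manually completed with ABORTED status, and the driver then fails over to the next registered address (10.0.0.2:4420) and resets the controller there. A quick way to gauge the volume of these aborts in the saved bdevperf log, assuming the try.txt path dumped later in this run:

    grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt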
00:25:11.616 00:25:11.616 Latency(us) 00:25:11.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.616 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:11.616 Verification LBA range: start 0x0 length 0x4000 00:25:11.616 NVMe0n1 : 15.01 12440.00 48.59 1198.16 0.00 9364.48 542.72 21299.20 00:25:11.616 =================================================================================================================== 00:25:11.616 Total : 12440.00 48.59 1198.16 0.00 9364.48 542.72 21299.20 00:25:11.616 Received shutdown signal, test time was about 15.000000 seconds 00:25:11.616 00:25:11.616 Latency(us) 00:25:11.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.616 =================================================================================================================== 00:25:11.616 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2212660 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2212660 /var/tmp/bdevperf.sock 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2212660 ']' 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:11.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
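The pass/fail gate for the 15-second run is the step at host/failover.sh@65-67 above: the script counts "Resetting controller successful" notices in the bdevperf output and requires exactly three, one per forced path switch. A minimal sketch of that check, assuming $log points at the captured output:

    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then      # one successful reset per forced failover
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi

Here count=3, so (( count != 3 )) is false and the test moves on to a second, single-pass bdevperf instance (pid 2212660) for the per-path failover checks.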
00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.616 11:03:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:12.187 11:03:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.187 11:03:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:12.187 11:03:29 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:12.187 [2024-07-12 11:03:29.159799] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:12.448 11:03:29 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:12.448 [2024-07-12 11:03:29.328201] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:12.448 11:03:29 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.708 NVMe0n1 00:25:12.708 11:03:29 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.279 00:25:13.279 11:03:30 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.540 00:25:13.540 11:03:30 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.540 11:03:30 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:13.800 11:03:30 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.800 11:03:30 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:17.101 11:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.101 11:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:17.101 11:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:17.101 11:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2213672 00:25:17.101 11:03:33 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2213672 00:25:18.041 0 00:25:18.041 11:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:18.041 [2024-07-12 11:03:28.247567] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:18.041 [2024-07-12 11:03:28.247626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212660 ] 00:25:18.041 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.041 [2024-07-12 11:03:28.322783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.041 [2024-07-12 11:03:28.375308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.041 [2024-07-12 11:03:30.688052] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:18.041 [2024-07-12 11:03:30.688093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.041 [2024-07-12 11:03:30.688103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.041 [2024-07-12 11:03:30.688109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.041 [2024-07-12 11:03:30.688114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.041 [2024-07-12 11:03:30.688119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.041 [2024-07-12 11:03:30.688127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.041 [2024-07-12 11:03:30.688133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.041 [2024-07-12 11:03:30.688138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.041 [2024-07-12 11:03:30.688143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.041 [2024-07-12 11:03:30.688164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.041 [2024-07-12 11:03:30.688175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2248ef0 (9): Bad file descriptor 00:25:18.041 [2024-07-12 11:03:30.698713] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:18.041 Running I/O for 1 seconds... 
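Condensing the trace from host/failover.sh@76 through @84, this phase wires up three portals for the same subsystem and then yanks the active one out from under bdevperf. The block below only restates commands already shown above, with the long workspace prefix folded into variables:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # target side: two extra listeners for the subsystem
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # host side: register all three paths under one bdev name so bdev_nvme can fail over
    for port in 4420 4421 4422; do
        $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # drop the active 4420 path; per the failover notice above, the reset lands on 4421
    $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1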
00:25:18.041 00:25:18.041 Latency(us) 00:25:18.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.041 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:18.041 Verification LBA range: start 0x0 length 0x4000 00:25:18.041 NVMe0n1 : 1.01 12967.43 50.65 0.00 0.00 9831.43 2143.57 8628.91 00:25:18.041 =================================================================================================================== 00:25:18.041 Total : 12967.43 50.65 0.00 0.00 9831.43 2143.57 8628.91 00:25:18.041 11:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:18.041 11:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:18.300 11:03:35 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.560 11:03:35 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:18.560 11:03:35 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:18.560 11:03:35 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.821 11:03:35 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2212660 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2212660 ']' 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2212660 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2212660 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2212660' 00:25:22.123 killing process with pid 2212660 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2212660 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2212660 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:22.123 11:03:38 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:22.384 
11:03:39 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:22.384 rmmod nvme_tcp 00:25:22.384 rmmod nvme_fabrics 00:25:22.384 rmmod nvme_keyring 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2208637 ']' 00:25:22.384 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2208637 00:25:22.385 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2208637 ']' 00:25:22.385 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2208637 00:25:22.385 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:22.385 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:22.385 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2208637 00:25:22.385 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:22.385 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:22.385 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2208637' 00:25:22.385 killing process with pid 2208637 00:25:22.385 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2208637 00:25:22.385 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2208637 00:25:22.645 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:22.645 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:22.645 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:22.645 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.645 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:22.645 11:03:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.645 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.645 11:03:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.600 11:03:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:24.600 00:25:24.600 real 0m39.284s 00:25:24.600 user 2m1.393s 00:25:24.600 sys 0m8.017s 00:25:24.600 11:03:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:24.600 11:03:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
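Every nvmf host test in this job shuts down the same way, so the sequence above is worth unpacking once: delete the subsystem over RPC, remove the per-test try.txt, unload nvme-tcp (which, as the rmmod lines show, drags nvme_tcp, nvme_fabrics and nvme_keyring out with it), kill the target reactor, and flush the test NIC address. The killprocess guard visible in the trace can be reconstructed roughly as follows; a sketch of the common/autotest_common.sh helper, not the exact upstream source:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                   # still alive?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap the reactor
    }

In this run the comm check resolves to reactor_1 for the target (pid 2208637), so the guard passes and the process is killed normally.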
00:25:24.600 ************************************ 00:25:24.600 END TEST nvmf_failover 00:25:24.600 ************************************ 00:25:24.600 11:03:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:24.600 11:03:41 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:24.600 11:03:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:24.600 11:03:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.600 11:03:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.600 ************************************ 00:25:24.600 START TEST nvmf_host_discovery 00:25:24.600 ************************************ 00:25:24.600 11:03:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:24.899 * Looking for test storage... 00:25:24.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.899 11:03:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:24.900 11:03:41 
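One detail from the common.sh sourcing a few lines back deserves a note: the host identity is minted fresh for every run. nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and NVME_HOSTID is simply that trailing UUID, which is why the two values above share the same 00d0226a suffix. A sketch of the derivation; the exact parameter expansion used by nvmf/common.sh may differ:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip everything through the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")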
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:24.900 11:03:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.044 11:03:48 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.044 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:33.045 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:33.045 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:33.045 11:03:48 
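The NIC probe above keys off PCI device IDs: 0x1592 and 0x159b are Intel E810 variants, 0x37d2 is the X722, and the 0x15b3 entries cover the Mellanox parts. The script walks its own pci_bus_cache rather than shelling out, but the same inventory can be spot-checked with lspci; a convenience sketch, not part of the test:

    lspci -d 8086:159b   # E810: should list 0000:4b:00.0 and 0000:4b:00.1 on this node
    lspci -d 8086:1592   # the other E810 ID the script accepts
    lspci -d 8086:37d2   # X722

Both ports found here are 8086:159b devices bound to ice, so they land in the e810 bucket and the test proceeds on hardware (is_hw=yes).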
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:33.045 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:33.045 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.045 11:03:48 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:33.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:25:33.045 00:25:33.045 --- 10.0.0.2 ping statistics --- 00:25:33.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.045 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:25:33.045 00:25:33.045 --- 10.0.0.1 ping statistics --- 00:25:33.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.045 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2218853 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2218853 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2218853 ']' 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:33.045 11:03:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.045 [2024-07-12 11:03:49.030708] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:33.045 [2024-07-12 11:03:49.030798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.045 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.045 [2024-07-12 11:03:49.124118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.045 [2024-07-12 11:03:49.217525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.045 [2024-07-12 11:03:49.217581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.045 [2024-07-12 11:03:49.217590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.045 [2024-07-12 11:03:49.217603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.045 [2024-07-12 11:03:49.217610] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:33.045 [2024-07-12 11:03:49.217636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.045 [2024-07-12 11:03:49.861296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:33.045 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.046 [2024-07-12 11:03:49.873492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.046 null0 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.046 null1 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2219033 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2219033 /tmp/host.sock 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2219033 ']' 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:33.046 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:33.046 11:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.046 [2024-07-12 11:03:49.968952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:33.046 [2024-07-12 11:03:49.969008] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219033 ] 00:25:33.046 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.307 [2024-07-12 11:03:50.053407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.307 [2024-07-12 11:03:50.155130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.880 11:03:50 
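With both applications answering on their sockets, discovery.sh provisions the target in five RPCs and immediately points the host at the discovery service. Restated from the trace: plain rpc_cmd calls go to the namespaced target's default socket, while the -s /tmp/host.sock calls go to the host-side app:

    # target (nvmf_tgt inside cvl_0_0_ns_spdk, pid 2218853)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512    # name, size in MB, block size
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine
    # host (second nvmf_tgt on /tmp/host.sock, pid 2219033)
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

From here on, any subsystem the discovery service on port 8009 reports should be attached automatically on the host under the nvme controller prefix.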
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.880 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 null0 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.142 11:03:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.142 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.142 [2024-07-12 11:03:51.125175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.404 11:03:51 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:34.404 11:03:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:34.976 [2024-07-12 11:03:51.821369] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:34.976 [2024-07-12 11:03:51.821403] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:34.976 [2024-07-12 11:03:51.821421] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.976 [2024-07-12 11:03:51.909692] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:35.236 [2024-07-12 11:03:52.136426] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:35.236 [2024-07-12 11:03:52.136462] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.498 11:03:52 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:35.498 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.760 [2024-07-12 11:03:52.689488] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:35.760 [2024-07-12 11:03:52.690029] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:35.760 [2024-07-12 11:03:52.690064] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.760 11:03:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.021 [2024-07-12 11:03:52.779365] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:36.021 11:03:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:36.021 [2024-07-12 11:03:52.887259] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:36.021 [2024-07-12 11:03:52.887288] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:36.021 [2024-07-12 11:03:52.887294] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.962 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.224 [2024-07-12 11:03:53.977335] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:37.224 [2024-07-12 11:03:53.977356] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:37.224 [2024-07-12 11:03:53.977662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.224 [2024-07-12 11:03:53.977679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.224 [2024-07-12 11:03:53.977689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.224 [2024-07-12 11:03:53.977696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.224 [2024-07-12 11:03:53.977704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.224 [2024-07-12 11:03:53.977711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.224 [2024-07-12 11:03:53.977719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.224 [2024-07-12 11:03:53.977726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.224 [2024-07-12 11:03:53.977733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:37.224 11:03:53 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:37.224 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:37.225 [2024-07-12 11:03:53.987673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:37.225 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.225 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:37.225 11:03:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.225 11:03:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:37.225 [2024-07-12 11:03:53.997710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.225 [2024-07-12 11:03:53.998151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.225 [2024-07-12 11:03:53.998165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.225 [2024-07-12 11:03:53.998173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.225 [2024-07-12 11:03:53.998185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 [2024-07-12 11:03:53.998200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.225 [2024-07-12 11:03:53.998207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.225 [2024-07-12 11:03:53.998215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.225 [2024-07-12 11:03:53.998227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
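The burst of *ERROR* entries around this point is the expected fallout of the step at host/discovery.sh@127 above: the test removed the first data listener, so the host's established qpair to 10.0.0.2:4420 dies ("Bad file descriptor" on flush) and bdev_nvme starts reset attempts that fail with connect() errno 111, which on Linux is ECONNREFUSED, because nothing listens on 4420 any more. Condensed from the xtrace, the triggering RPC was:

    # Drop the 4420 listener from the data subsystem. The target then emits
    # an AER and the host's discovery poller re-reads the discovery log page.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The retries keep firing until the refreshed log page lets the discovery service drop the stale 4420 path (the "not found" entry further down).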
00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.225 [2024-07-12 11:03:54.007762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.225 [2024-07-12 11:03:54.008121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.225 [2024-07-12 11:03:54.008132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.225 [2024-07-12 11:03:54.008137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.225 [2024-07-12 11:03:54.008145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 [2024-07-12 11:03:54.008156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.225 [2024-07-12 11:03:54.008161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.225 [2024-07-12 11:03:54.008166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.225 [2024-07-12 11:03:54.008173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.225 [2024-07-12 11:03:54.017807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.225 [2024-07-12 11:03:54.018039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.225 [2024-07-12 11:03:54.018053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.225 [2024-07-12 11:03:54.018058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.225 [2024-07-12 11:03:54.018065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 [2024-07-12 11:03:54.018072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.225 [2024-07-12 11:03:54.018077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.225 [2024-07-12 11:03:54.018081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.225 [2024-07-12 11:03:54.018088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
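For reference, the two list helpers this test polls throughout, get_subsystem_names (host/discovery.sh@59) and get_bdev_list (host/discovery.sh@55), reduce to roughly the following when reconstructed from their xtrace; a sketch, not the verbatim script:

    # Controller names known to the host-side bdev_nvme module, as one
    # sorted space-separated string: "" before attach, "nvme0" after.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    # Bdevs the host created from attached namespaces, e.g. "nvme0n1 nvme0n2"
    # once both null bdevs are exported by the target.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }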
00:25:37.225 [2024-07-12 11:03:54.027850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.225 [2024-07-12 11:03:54.028388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.225 [2024-07-12 11:03:54.028417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.225 [2024-07-12 11:03:54.028426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.225 [2024-07-12 11:03:54.028439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 [2024-07-12 11:03:54.028457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.225 [2024-07-12 11:03:54.028463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.225 [2024-07-12 11:03:54.028471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.225 [2024-07-12 11:03:54.028482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.225 [2024-07-12 11:03:54.037896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.225 [2024-07-12 11:03:54.038347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.225 [2024-07-12 11:03:54.038377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.225 [2024-07-12 11:03:54.038385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.225 [2024-07-12 11:03:54.038399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 [2024-07-12 11:03:54.038407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.225 [2024-07-12 11:03:54.038414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.225 [2024-07-12 11:03:54.038420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.225 [2024-07-12 11:03:54.038430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
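Note the timestamps on the nvme_ctrlr_disconnect notices: the failed reset cycles repeat roughly every 10 ms for as long as the stale 4420 path lingers. If this console output is saved to a file (the filename below is hypothetical), the cadence is easy to eyeball:

    # Print just the disconnect timestamps from a saved copy of this log.
    grep -o '\[2024-07-12 [0-9:.]*\] nvme_ctrlr.c:1720' nvmf_host_discovery.log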
00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.225 [2024-07-12 11:03:54.047943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.225 [2024-07-12 11:03:54.048396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.225 [2024-07-12 11:03:54.048425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.225 [2024-07-12 11:03:54.048434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.225 [2024-07-12 11:03:54.048447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 [2024-07-12 11:03:54.048455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.225 [2024-07-12 11:03:54.048460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.225 [2024-07-12 11:03:54.048465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.225 [2024-07-12 11:03:54.048475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
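The waitforcondition helper driving all of these repeated checks is visible in the xtrace as autotest_common.sh@912-@918. Reconstructed from those lines it is approximately:

    # Re-evaluate an arbitrary shell condition up to 10 times, one second
    # apart, succeeding as soon as it holds (sketch, not the verbatim helper).
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

Each "(( max-- ))" / "eval" / "sleep 1" group in this log is one iteration of that loop.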
00:25:37.225 [2024-07-12 11:03:54.057990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.225 [2024-07-12 11:03:54.058455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.225 [2024-07-12 11:03:54.058484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.225 [2024-07-12 11:03:54.058493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.225 [2024-07-12 11:03:54.058506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 [2024-07-12 11:03:54.058525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.225 [2024-07-12 11:03:54.058530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.225 [2024-07-12 11:03:54.058535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.225 [2024-07-12 11:03:54.058545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.225 [2024-07-12 11:03:54.068037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.225 [2024-07-12 11:03:54.068431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.225 [2024-07-12 11:03:54.068441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.225 [2024-07-12 11:03:54.068446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.225 [2024-07-12 11:03:54.068454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 [2024-07-12 11:03:54.068461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.225 [2024-07-12 11:03:54.068465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.225 [2024-07-12 11:03:54.068470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.225 [2024-07-12 11:03:54.068477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
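Likewise, the notification bookkeeping at host/discovery.sh@74-@75 pairs notify_get_notifications with a jq length count and advances notify_id so already-counted events are skipped on the next call. A sketch consistent with the values seen in this log (notify_id stepping 0 -> 1 -> 2, then holding):

    # Count events newer than the notify_id cursor, then advance it; each
    # namespace add/remove above produced exactly one notification.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }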
00:25:37.225 [2024-07-12 11:03:54.078083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.225 [2024-07-12 11:03:54.078422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.225 [2024-07-12 11:03:54.078431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.225 [2024-07-12 11:03:54.078436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.225 [2024-07-12 11:03:54.078443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 [2024-07-12 11:03:54.078450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.225 [2024-07-12 11:03:54.078454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.225 [2024-07-12 11:03:54.078458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.225 [2024-07-12 11:03:54.078466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.225 [2024-07-12 11:03:54.088129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.225 [2024-07-12 11:03:54.088508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.225 [2024-07-12 11:03:54.088516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.225 [2024-07-12 11:03:54.088524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.225 [2024-07-12 11:03:54.088532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.225 [2024-07-12 11:03:54.088538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.225 [2024-07-12 11:03:54.088543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.225 [2024-07-12 11:03:54.088547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.225 [2024-07-12 11:03:54.088554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:37.225 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:37.226 [2024-07-12 11:03:54.098172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:37.226 [2024-07-12 11:03:54.099092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.226 [2024-07-12 11:03:54.099109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c49b0 with addr=10.0.0.2, port=4420 00:25:37.226 [2024-07-12 11:03:54.099115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c49b0 is same with the state(5) to be set 00:25:37.226 [2024-07-12 11:03:54.099133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c49b0 (9): Bad file descriptor 00:25:37.226 [2024-07-12 11:03:54.099148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.226 [2024-07-12 11:03:54.099153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.226 [2024-07-12 11:03:54.099158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.226 [2024-07-12 11:03:54.099167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
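At this point the host is one discovery log-page refresh away from convergence: the entries just below show the 4420 path "not found" and 4421 "found again", after which the port check finally passes. What that check evaluates, condensed from the host/discovery.sh@63 xtrace:

    # List the transport service IDs (ports) of every path on controller
    # nvme0. After the refresh this prints "4421", matching $NVMF_SECOND_PORT
    # and ending the wait loop.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs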
00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:37.226 [2024-07-12 11:03:54.106166] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:37.226 [2024-07-12 11:03:54.106181] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:37.226 11:03:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:38.187 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:38.187 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:38.187 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:38.187 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:38.187 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.187 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:38.187 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:38.187 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.187 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:38.187 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock 
notify_get_notifications -i 2 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:38.448 11:03:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.448 11:03:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.831 [2024-07-12 11:03:56.473327] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:39.831 [2024-07-12 11:03:56.473343] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:39.831 [2024-07-12 11:03:56.473354] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:39.831 [2024-07-12 11:03:56.561604] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:40.092 [2024-07-12 11:03:56.832611] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:40.092 [2024-07-12 11:03:56.832636] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.092 request: 00:25:40.092 { 00:25:40.092 "name": "nvme", 00:25:40.092 "trtype": "tcp", 00:25:40.092 "traddr": "10.0.0.2", 00:25:40.092 "adrfam": "ipv4", 00:25:40.092 "trsvcid": 
"8009", 00:25:40.092 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:40.092 "wait_for_attach": true, 00:25:40.092 "method": "bdev_nvme_start_discovery", 00:25:40.092 "req_id": 1 00:25:40.092 } 00:25:40.092 Got JSON-RPC error response 00:25:40.092 response: 00:25:40.092 { 00:25:40.092 "code": -17, 00:25:40.092 "message": "File exists" 00:25:40.092 } 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.092 request: 00:25:40.092 { 00:25:40.092 "name": "nvme_second", 00:25:40.092 "trtype": "tcp", 00:25:40.092 "traddr": "10.0.0.2", 00:25:40.092 "adrfam": "ipv4", 00:25:40.092 "trsvcid": "8009", 00:25:40.092 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:40.092 "wait_for_attach": true, 00:25:40.092 "method": "bdev_nvme_start_discovery", 00:25:40.092 "req_id": 1 00:25:40.092 } 00:25:40.092 Got JSON-RPC error response 00:25:40.092 response: 00:25:40.092 { 00:25:40.092 "code": -17, 00:25:40.092 "message": "File exists" 00:25:40.092 } 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:40.092 11:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.092 11:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:40.092 11:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:40.092 11:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.092 11:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.092 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.092 11:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.092 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.092 11:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.092 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.353 11:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.294 [2024-07-12 11:03:58.092860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.294 [2024-07-12 11:03:58.092882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e08c0 with addr=10.0.0.2, port=8010 00:25:41.294 [2024-07-12 11:03:58.092892] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:41.294 [2024-07-12 11:03:58.092897] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:41.294 [2024-07-12 11:03:58.092902] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:42.250 [2024-07-12 11:03:59.095165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.251 [2024-07-12 11:03:59.095182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e08c0 with addr=10.0.0.2, port=8010 00:25:42.251 [2024-07-12 11:03:59.095191] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:42.251 [2024-07-12 11:03:59.095195] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:42.251 [2024-07-12 11:03:59.095200] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:43.193 [2024-07-12 11:04:00.097153] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:43.193 request: 00:25:43.193 { 00:25:43.193 "name": "nvme_second", 00:25:43.193 "trtype": "tcp", 00:25:43.193 "traddr": "10.0.0.2", 00:25:43.193 "adrfam": "ipv4", 00:25:43.193 "trsvcid": "8010", 00:25:43.193 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:43.193 "wait_for_attach": false, 00:25:43.193 "attach_timeout_ms": 3000, 00:25:43.193 "method": "bdev_nvme_start_discovery", 00:25:43.193 "req_id": 1 00:25:43.193 } 00:25:43.193 Got JSON-RPC error response 00:25:43.193 response: 00:25:43.193 { 00:25:43.193 "code": -110, 00:25:43.193 "message": "Connection timed out" 00:25:43.193 } 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2219033 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.193 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.193 rmmod nvme_tcp 00:25:43.454 rmmod nvme_fabrics 00:25:43.454 rmmod nvme_keyring 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2218853 ']' 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2218853 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2218853 ']' 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2218853 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2218853 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 2218853' 00:25:43.454 killing process with pid 2218853 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2218853 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2218853 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.454 11:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:45.999 00:25:45.999 real 0m20.918s 00:25:45.999 user 0m25.403s 00:25:45.999 sys 0m6.984s 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.999 ************************************ 00:25:45.999 END TEST nvmf_host_discovery 00:25:45.999 ************************************ 00:25:45.999 11:04:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:45.999 11:04:02 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:45.999 11:04:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:45.999 11:04:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.999 11:04:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.999 ************************************ 00:25:45.999 START TEST nvmf_host_multipath_status 00:25:45.999 ************************************ 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:45.999 * Looking for test storage... 
00:25:45.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:45.999 11:04:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:45.999 11:04:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:54.146 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:54.146 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:54.146 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:54.147 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:54.147 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:54.147 11:04:09 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.147 11:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:54.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:25:54.147 00:25:54.147 --- 10.0.0.2 ping statistics --- 00:25:54.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.147 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:25:54.147 00:25:54.147 --- 10.0.0.1 ping statistics --- 00:25:54.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.147 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2225238 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2225238 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2225238 ']' 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:54.147 [2024-07-12 11:04:10.140664] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:54.147 [2024-07-12 11:04:10.140732] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.147 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.147 [2024-07-12 11:04:10.228610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:54.147 [2024-07-12 11:04:10.327795] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.147 [2024-07-12 11:04:10.327856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.147 [2024-07-12 11:04:10.327864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.147 [2024-07-12 11:04:10.327871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.147 [2024-07-12 11:04:10.327877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.147 [2024-07-12 11:04:10.327960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.147 [2024-07-12 11:04:10.327962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2225238 00:25:54.147 11:04:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:54.147 [2024-07-12 11:04:11.121101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.409 11:04:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:54.409 Malloc0 00:25:54.409 11:04:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:54.671 11:04:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:54.932 11:04:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.932 [2024-07-12 11:04:11.863749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.932 11:04:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:55.193 [2024-07-12 11:04:12.048236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:55.193 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2225679 00:25:55.193 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:55.193 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:55.193 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2225679 /var/tmp/bdevperf.sock 00:25:55.193 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2225679 ']' 00:25:55.193 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:55.193 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.193 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:55.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:55.193 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.193 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.148 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.148 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:56.148 11:04:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:56.148 11:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:56.412 Nvme0n1 00:25:56.672 11:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:56.932 Nvme0n1 00:25:56.932 11:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:56.932 11:04:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:58.846 11:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:58.846 11:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:59.184 11:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:59.184 11:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:00.138 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:00.138 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:00.396 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.396 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.396 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.396 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:00.396 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.396 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.656 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.656 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.656 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.656 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:00.656 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.656 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:00.656 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.656 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:00.916 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.917 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:00.917 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.917 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.177 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.177 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:01.177 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.177 11:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.177 11:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.177 11:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:01.177 11:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:01.437 11:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:01.696 11:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:02.636 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:02.636 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:02.636 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.636 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.636 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.636 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:02.896 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.896 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:02.896 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.896 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:02.896 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.896 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.157 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.157 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.157 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.157 11:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.157 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.157 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.157 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.157 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:03.418 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.418 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:03.418 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.418 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.679 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.679 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:03.679 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:03.679 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:03.941 11:04:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:04.884 11:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:04.884 11:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:04.884 11:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.884 11:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.144 11:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.144 11:04:21 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.144 11:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.144 11:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.144 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.144 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.144 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.144 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.405 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.405 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.405 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.405 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.667 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.667 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.667 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.667 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.667 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.667 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:05.667 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.667 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.928 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.928 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:05.928 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:06.192 11:04:22 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:06.192 11:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.578 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.839 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.839 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.839 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.839 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.839 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.839 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:07.839 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.839 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.099 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:08.099 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:08.099 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.099 11:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.359 11:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.359 11:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:08.359 11:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:08.359 11:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:08.619 11:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:09.558 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:09.558 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:09.558 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.558 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.817 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.817 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:09.817 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.817 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.076 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.076 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.076 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.076 11:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.076 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.076 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:10.076 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.076 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.336 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.336 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:10.336 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.336 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.597 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.597 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:10.597 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.597 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.597 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.597 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:10.597 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:10.857 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:11.117 11:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:12.059 11:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:12.059 11:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:12.059 11:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.059 11:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.059 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.059 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:12.059 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.059 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.318 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.319 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.319 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.319 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:12.577 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.577 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:12.577 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.577 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:12.577 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.577 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:12.577 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.577 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:12.837 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.837 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:12.837 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.837 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.096 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.096 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:13.096 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:13.096 11:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:13.357 11:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:13.357 11:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.744 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.004 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.004 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.004 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.004 11:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.265 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.265 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:15.265 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.265 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.265 11:04:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.265 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:15.265 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.265 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.525 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.525 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:15.525 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:15.784 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:15.784 11:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:17.169 11:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:17.169 11:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:17.169 11:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.169 11:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.169 11:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.169 11:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:17.169 11:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.169 11:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.169 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.169 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.169 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.169 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.430 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.430 11:04:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.430 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.430 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.430 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.430 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:17.430 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.690 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:17.690 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.690 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:17.690 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.690 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.949 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.949 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:17.949 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:17.949 11:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:18.209 11:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:19.150 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:19.150 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.150 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.150 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:19.410 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.410 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:19.410 11:04:36 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.410 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.671 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.671 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.671 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.671 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:19.671 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.671 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:19.671 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.671 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.932 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.932 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.932 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.932 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.192 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.192 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.192 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.192 11:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.192 11:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.192 11:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:20.192 11:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:20.453 11:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:20.714 11:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:21.656 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:21.656 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:21.656 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.656 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.656 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.656 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:21.656 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.657 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:21.918 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.918 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:21.918 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.918 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.198 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.198 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.198 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.198 11:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.198 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.198 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.198 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.198 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.464 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.464 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:22.464 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.464 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2225679 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2225679 ']' 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2225679 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2225679 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2225679' 00:26:22.728 killing process with pid 2225679 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2225679 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2225679 00:26:22.728 Connection closed with partial response: 00:26:22.728 00:26:22.728 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2225679 00:26:22.728 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:22.728 [2024-07-12 11:04:12.134306] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:22.728 [2024-07-12 11:04:12.134385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225679 ] 00:26:22.728 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.728 [2024-07-12 11:04:12.217049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.728 [2024-07-12 11:04:12.308760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.728 Running I/O for 90 seconds... 
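The set_ANA_state/check_status exchanges above reduce to two RPC patterns: nvmf_subsystem_listener_set_ana_state against the target, and bdev_nvme_get_io_paths against the bdevperf process, filtered through jq. Below is a minimal sketch of those helpers as reconstructed from the xtrace output above, not the verbatim script; the real definitions live in test/nvmf/host/multipath_status.sh, and $rootdir standing for the SPDK checkout root is an assumption:

  # Reconstructed sketch (assumes $rootdir points at the SPDK checkout).
  set_ANA_state() { # <state for port 4420> <state for port 4421>
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  port_status() { # <trsvcid> <io_path attribute> <expected value>
    local status
    status=$("$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
    [[ "$status" == "$3" ]]
  }

  check_status() { # six expected values, in the order asserted above
    port_status 4420 current "$1" && port_status 4421 current "$2" &&
      port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

Read this way, check_status true false true true true true (sh@92) asserts that only the 4420 path carries I/O while both paths stay connected and accessible, the expected picture for optimized/optimized under the default active_passive policy; after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active (sh@116), the same optimized/optimized ANA state is asserted as true true true true true true (sh@121), i.e. both paths current at once.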
00:26:22.728 [2024-07-12 11:04:25.282426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:22.728 [2024-07-12 11:04:25.282954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.728 [2024-07-12 11:04:25.282959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:22.729 [2024-07-12 11:04:25.282970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-12 11:04:25.282975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-12 11:04:25.282986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
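The NOTICE pairs streaming above and below are the bdevperf process's NVMe driver (nvme_qpair.c) printing each queued WRITE and its completion status. The (03/02) decodes, per the NVMe spec, as Status Code Type 0x3 (Path Related Status) / Status Code 0x02 (Asymmetric Access Inaccessible): each write was rejected because it arrived on the listener whose ANA state the test had just set to inaccessible, and the host multipath layer is expected to retry it on the surviving path. A quick way to count such completions in the captured log (try.txt is the file cat'ed above):

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt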
00:26:22.729 [2024-07-12 11:04:25.282990 - 11:04:25.284978] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs: WRITE sqid:1 nsid:1 lba:110912-111216 len:8 and READ sqid:1 nsid:1 lba:110200-110664 len:8, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 dnr:0 (several hundred near-identical line pairs condensed)
00:26:22.731 [2024-07-12 11:04:37.429929 - 11:04:37.431614] nvme_qpair.c: same *NOTICE* pattern: READ sqid:1 nsid:1 lba:52344-53064 and WRITE sqid:1 nsid:1 lba:53104-53328, all completing ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 (condensed)
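The status in every one of the condensed completions, (03/02), is NVMe path-related status type 0x3 with status code 0x2, which is exactly the ASYMMETRIC ACCESS INACCESSIBLE state the multipath test drives the active path into, so the flood is expected noise rather than a failure. It is easier to confirm that by aggregating than by scrolling; a minimal sketch, assuming the console output has been captured to a hypothetical build.log:

# build.log is a hypothetical capture of this console output.
# Total error completions, then the READ/WRITE split of the affected commands:
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log
grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c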
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:22.732 [2024-07-12 11:04:37.431609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.732 [2024-07-12 11:04:37.431614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:22.732 Received shutdown signal, test time was about 25.631711 seconds 00:26:22.732 00:26:22.732 Latency(us) 00:26:22.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.732 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:22.732 Verification LBA range: start 0x0 length 0x4000 00:26:22.732 Nvme0n1 : 25.63 12198.73 47.65 0.00 0.00 10473.32 474.45 3019898.88 00:26:22.732 =================================================================================================================== 00:26:22.732 Total : 12198.73 47.65 0.00 0.00 10473.32 474.45 3019898.88 00:26:22.732 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:22.993 rmmod nvme_tcp 00:26:22.993 rmmod nvme_fabrics 00:26:22.993 rmmod nvme_keyring 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2225238 ']' 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2225238 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2225238 ']' 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2225238 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2225238 00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # 
00:26:22.732 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:22.993 rmmod nvme_tcp
00:26:22.993 rmmod nvme_fabrics
00:26:22.993 rmmod nvme_keyring
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2225238 ']'
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2225238
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2225238 ']'
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2225238
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2225238
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2225238'
00:26:22.993 killing process with pid 2225238
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2225238
00:26:22.993 11:04:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2225238
00:26:23.254 11:04:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:23.254 11:04:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:23.254 11:04:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:23.254 11:04:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:23.254 11:04:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:23.254 11:04:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:23.254 11:04:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:23.254 11:04:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:25.801 11:04:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:25.801
00:26:25.801 real	0m39.619s
00:26:25.801 user	1m41.899s
00:26:25.801 sys	0m10.879s
00:26:25.801 11:04:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:25.801 11:04:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:25.801 ************************************
00:26:25.801 END TEST nvmf_host_multipath_status
00:26:25.801 ************************************
00:26:25.801 11:04:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:25.801 11:04:42 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:25.801 11:04:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:25.801 11:04:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:25.801 11:04:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:25.801 ************************************
00:26:25.801 START TEST nvmf_discovery_remove_ifc
00:26:25.801 ************************************
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
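From the trace, run_test is the harness wrapper that prints the START banner, times the test script (the real/user/sys lines above come from it), prints the END banner, and propagates the exit code. A minimal bash sketch of that shape, inferred from this output rather than taken from the actual autotest_common.sh source:

# Hypothetical reconstruction of the wrapper's visible behavior; the real
# run_test in autotest_common.sh also manages xtrace and argument checks.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test nvmf_discovery_remove_ifc ./discovery_remove_ifc.sh --transport=tcp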
00:26:25.802 * Looking for test storage...
00:26:25.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 through @6 -- # PATH assignments, export PATH and echo $PATH (each value is the standard system PATH with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended repeatedly by successive sourcing; five near-identical multi-kilobyte lines condensed)
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
host_sock=/tmp/host.sock 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.802 11:04:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:32.395 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:32.395 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.395 11:04:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:32.395 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:32.395 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.395 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:32.396 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.396 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.396 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:32.396 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:32.396 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.396 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.658 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.658 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.658 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:32.658 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.658 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.658 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.658 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:32.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:26:32.658 00:26:32.658 --- 10.0.0.2 ping statistics --- 00:26:32.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.658 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:26:32.658 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:26:32.920 00:26:32.920 --- 10.0.0.1 ping statistics --- 00:26:32.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.920 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2235438 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2235438 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2235438 ']' 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.920 11:04:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.920 [2024-07-12 11:04:49.760983] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
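[Editor's note] The nvmf_tgt being started here runs inside the cvl_0_0_ns_spdk namespace built a few entries earlier. Condensed from the nvmf_tcp_init xtrace above — all commands appear verbatim in the trace; the interface names are this rig's renamed ice ports, so substitute your own:

ip netns add cvl_0_0_ns_spdk                          # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root ns -> target, verified above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator

Every later target-side command is then prefixed with 'ip netns exec cvl_0_0_ns_spdk' via NVMF_TARGET_NS_CMD, which is why the nvmf_tgt launch above goes through it.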
00:26:32.920 [2024-07-12 11:04:49.761060] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.920 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.920 [2024-07-12 11:04:49.847507] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.181 [2024-07-12 11:04:49.939478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.181 [2024-07-12 11:04:49.939534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.181 [2024-07-12 11:04:49.939542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.181 [2024-07-12 11:04:49.939548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.181 [2024-07-12 11:04:49.939555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.181 [2024-07-12 11:04:49.939579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.754 [2024-07-12 11:04:50.599940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.754 [2024-07-12 11:04:50.608169] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:33.754 null0 00:26:33.754 [2024-07-12 11:04:50.640120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2235484 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2235484 /tmp/host.sock 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2235484 ']' 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:33.754 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:33.754 11:04:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.754 [2024-07-12 11:04:50.715172] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:33.754 [2024-07-12 11:04:50.715236] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235484 ] 00:26:34.015 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.015 [2024-07-12 11:04:50.797575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.015 [2024-07-12 11:04:50.894077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.589 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:34.589 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:34.589 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:34.589 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:34.589 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.589 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.589 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.589 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:34.589 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.589 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.850 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.850 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:34.850 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.850 11:04:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.793 [2024-07-12 11:04:52.678340] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:35.793 [2024-07-12 11:04:52.678362] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:35.793 [2024-07-12 11:04:52.678379] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:35.793 [2024-07-12 11:04:52.765649] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:36.054 [2024-07-12 11:04:52.994717] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:36.054 [2024-07-12 11:04:52.994769] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:36.054 [2024-07-12 11:04:52.994792] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:36.054 [2024-07-12 11:04:52.994806] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:36.054 [2024-07-12 11:04:52.994827] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:36.054 11:04:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.054 11:04:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:36.054 [2024-07-12 11:04:52.998353] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9c27b0 was disconnected and freed. delete nvme_qpair. 00:26:36.054 11:04:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.054 11:04:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.054 11:04:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.054 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.054 11:04:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.054 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.054 11:04:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.054 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.314 11:04:53 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.314 11:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.698 11:04:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.698 11:04:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.698 11:04:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.698 11:04:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.698 11:04:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.698 11:04:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.698 11:04:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.698 11:04:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.698 11:04:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.698 11:04:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.639 11:04:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.639 11:04:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.639 11:04:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.639 11:04:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.639 11:04:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.639 11:04:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.639 11:04:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.639 11:04:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.639 11:04:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:38.639 11:04:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.681 11:04:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.681 11:04:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.681 11:04:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.681 11:04:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.681 11:04:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.681 11:04:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.681 11:04:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.681 11:04:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.681 11:04:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.681 11:04:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.623 11:04:57 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.623 11:04:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.623 11:04:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.623 11:04:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.623 11:04:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.623 11:04:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.623 11:04:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.623 11:04:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.623 11:04:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:40.623 11:04:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.564 [2024-07-12 11:04:58.435027] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:41.564 [2024-07-12 11:04:58.435066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.564 [2024-07-12 11:04:58.435075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.564 [2024-07-12 11:04:58.435084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.564 [2024-07-12 11:04:58.435090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.564 [2024-07-12 11:04:58.435095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.564 [2024-07-12 11:04:58.435100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.564 [2024-07-12 11:04:58.435106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.564 [2024-07-12 11:04:58.435111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.564 [2024-07-12 11:04:58.435117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.564 [2024-07-12 11:04:58.435125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.564 [2024-07-12 11:04:58.435130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x989040 is same with the state(5) to be set 00:26:41.564 [2024-07-12 11:04:58.445048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x989040 (9): Bad file descriptor 00:26:41.564 [2024-07-12 11:04:58.455085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:41.564 11:04:58 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.564 11:04:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.564 11:04:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.564 11:04:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.564 11:04:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.564 11:04:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.564 11:04:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.503 [2024-07-12 11:04:59.465209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:42.503 [2024-07-12 11:04:59.465297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x989040 with addr=10.0.0.2, port=4420 00:26:42.503 [2024-07-12 11:04:59.465327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x989040 is same with the state(5) to be set 00:26:42.503 [2024-07-12 11:04:59.465385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x989040 (9): Bad file descriptor 00:26:42.503 [2024-07-12 11:04:59.466484] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:42.503 [2024-07-12 11:04:59.466538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:42.503 [2024-07-12 11:04:59.466570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:42.503 [2024-07-12 11:04:59.466593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:42.503 [2024-07-12 11:04:59.466654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.503 [2024-07-12 11:04:59.466678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:42.503 11:04:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.763 11:04:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:42.763 11:04:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.703 [2024-07-12 11:05:00.469084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:43.703 [2024-07-12 11:05:00.469110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:43.703 [2024-07-12 11:05:00.469116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:43.703 [2024-07-12 11:05:00.469126] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:43.703 [2024-07-12 11:05:00.469140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
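[Editor's note] The errno-110 reconnect churn above is bounded by the timeouts passed when discovery was started (host/discovery_remove_ifc.sh@69 earlier in this test). rpc_cmd wraps scripts/rpc.py, so the equivalent stand-alone invocation, taken from the trace, is:

# retry the connect once per second, declare the controller lost after 2 s offline,
# and fail rather than queue pending I/O after 1 s:
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach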
00:26:43.703 [2024-07-12 11:05:00.469158] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:43.703 [2024-07-12 11:05:00.469180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.703 [2024-07-12 11:05:00.469188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.703 [2024-07-12 11:05:00.469196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.703 [2024-07-12 11:05:00.469201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.703 [2024-07-12 11:05:00.469206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.703 [2024-07-12 11:05:00.469212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.703 [2024-07-12 11:05:00.469218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.703 [2024-07-12 11:05:00.469223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.703 [2024-07-12 11:05:00.469228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.703 [2024-07-12 11:05:00.469233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.703 [2024-07-12 11:05:00.469239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
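[Editor's note] The get_bdev_list/wait_for_bdev xtrace that recurs throughout this test (host/discovery_remove_ifc.sh@29-34) reduces to a one-second polling loop; an assumed reconstruction from the trace — the script on disk may differ slightly:

get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
        local bdev=$1                    # 'nvme0n1', 'nvme1n1', or '' for "bdev gone"
        while [[ "$(get_bdev_list)" != "$bdev" ]]; do
                sleep 1                  # each 'sleep 1' entry above is one spin of this loop
        done
}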
00:26:43.703 [2024-07-12 11:05:00.469949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9884c0 (9): Bad file descriptor 00:26:43.703 [2024-07-12 11:05:00.470959] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:43.703 [2024-07-12 11:05:00.470967] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.703 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.964 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:43.964 11:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.905 11:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.905 11:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.905 11:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.905 11:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.905 11:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.905 11:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.905 11:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.905 11:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.905 11:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:44.905 11:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.847 [2024-07-12 11:05:02.525322] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:45.847 [2024-07-12 11:05:02.525336] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:45.847 [2024-07-12 11:05:02.525348] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:45.847 [2024-07-12 11:05:02.614600] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:45.848 [2024-07-12 11:05:02.715993] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:45.848 [2024-07-12 11:05:02.716024] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:45.848 [2024-07-12 11:05:02.716038] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:45.848 [2024-07-12 11:05:02.716050] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:45.848 [2024-07-12 11:05:02.716059] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:45.848 [2024-07-12 11:05:02.722323] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x99f310 was disconnected and freed. delete nvme_qpair. 
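[Editor's note] Stripped of the xtrace noise, the fault this test injected and then healed is four ip commands run in the target namespace, all visible verbatim above:

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # drop the target IP
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # take the link down
# ...host controller times out, nvme0n1 disappears from bdev_get_bdevs...
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # restore the address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# ...discovery re-attaches and surfaces the namespace again, now as nvme1n1...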
00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2235484 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2235484 ']' 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2235484 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.848 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2235484 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2235484' 00:26:46.108 killing process with pid 2235484 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2235484 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2235484 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.108 11:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.108 rmmod nvme_tcp 00:26:46.108 rmmod nvme_fabrics 00:26:46.108 rmmod nvme_keyring 00:26:46.108 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.108 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:46.108 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
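[Editor's note] The kill of pid 2235484 just above follows the harness's killprocess pattern; an assumed reconstruction from the autotest_common.sh line numbers in the trace (@948-@972) — not a verbatim copy of the helper:

killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                       # nothing left to kill
        if [ "$(uname)" = Linux ]; then
                local process_name
                process_name=$(ps --no-headers -o comm= "$pid")
                # when launched via sudo, signal the child, not sudo itself
                [ "$process_name" = sudo ] && pid=$(pgrep -P "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap it so ports/sockets free up
}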
00:26:46.108 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2235438 ']' 00:26:46.108 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2235438 00:26:46.108 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2235438 ']' 00:26:46.108 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2235438 00:26:46.108 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:46.108 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:46.108 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2235438 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2235438' 00:26:46.369 killing process with pid 2235438 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2235438 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2235438 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.369 11:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.913 11:05:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:48.913 00:26:48.913 real 0m23.042s 00:26:48.913 user 0m27.307s 00:26:48.913 sys 0m6.789s 00:26:48.913 11:05:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:48.913 11:05:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.913 ************************************ 00:26:48.913 END TEST nvmf_discovery_remove_ifc 00:26:48.913 ************************************ 00:26:48.913 11:05:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:48.913 11:05:05 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:48.913 11:05:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:48.913 11:05:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.913 11:05:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.913 ************************************ 00:26:48.913 START TEST nvmf_identify_kernel_target 00:26:48.913 ************************************ 
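[Editor's note] nvmftestfini then unwinds everything in reverse; condensed from the trace, with one assumption flagged: the body of _remove_spdk_ns is not shown in this log, so the netns delete below is inferred rather than quoted:

modprobe -v -r nvme-tcp               # also pulls out nvme-fabrics and nvme-keyring (rmmod lines above)
killprocess "$nvmfpid"                # the 'killing process with pid 2235438' above
ip netns delete cvl_0_0_ns_spdk       # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1              # nvmf/common.sh@279, verbatim above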
00:26:48.913 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:48.913 * Looking for test storage... 00:26:48.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.913 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.913 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:48.913 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.913 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.913 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.913 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.913 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:48.914 11:05:05 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:48.914 11:05:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.505 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:55.506 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:55.506 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:55.506 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:55.506 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.506 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:26:55.767 00:26:55.767 --- 10.0.0.2 ping statistics --- 00:26:55.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.767 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
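The nvmf_tcp_init sequence traced above builds a loopback rig from the two ports of one NIC: cvl_0_0 moves into a private network namespace as the target side, cvl_0_1 stays in the root namespace as the initiator, and the pings around this point check both directions. A condensed sketch of those commands, with interface, namespace, and address values exactly as in this run:

  ip -4 addr flush cvl_0_0                      # start from clean interfaces
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                            # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back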
00:26:55.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:26:55.767 00:26:55.767 --- 10.0.0.1 ping statistics --- 00:26:55.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.767 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:55.767 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:56.028 11:05:12 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:56.028 11:05:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:59.330 Waiting for block devices as requested 00:26:59.330 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:59.330 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:59.591 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:59.591 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:59.591 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:59.852 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:59.852 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:59.852 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:00.114 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:00.114 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:00.376 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:00.376 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:00.376 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:00.637 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:00.637 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:00.637 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:00.900 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:01.161 No valid GPT data, bailing 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:01.161 11:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:01.161 00:27:01.161 Discovery Log Number of Records 2, Generation counter 2 00:27:01.161 =====Discovery Log Entry 0====== 00:27:01.161 trtype: tcp 00:27:01.161 adrfam: ipv4 00:27:01.161 subtype: current discovery subsystem 00:27:01.161 treq: not specified, sq flow control disable supported 00:27:01.161 portid: 1 00:27:01.161 trsvcid: 4420 00:27:01.161 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:01.161 traddr: 10.0.0.1 00:27:01.161 eflags: none 00:27:01.161 sectype: none 00:27:01.161 =====Discovery Log Entry 1====== 00:27:01.161 trtype: tcp 00:27:01.161 adrfam: ipv4 00:27:01.161 subtype: nvme subsystem 00:27:01.161 treq: not specified, sq flow control disable supported 00:27:01.161 portid: 1 00:27:01.161 trsvcid: 4420 00:27:01.161 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:01.161 traddr: 10.0.0.1 00:27:01.161 eflags: none 00:27:01.161 sectype: none 00:27:01.161 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:01.161 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:01.424 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.424 ===================================================== 00:27:01.424 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:01.424 ===================================================== 00:27:01.424 Controller Capabilities/Features 00:27:01.424 ================================ 00:27:01.424 Vendor ID: 0000 00:27:01.424 Subsystem Vendor ID: 0000 00:27:01.424 Serial Number: 2075de364ea29db90aef 00:27:01.424 Model Number: Linux 00:27:01.424 Firmware Version: 6.7.0-68 00:27:01.424 Recommended Arb Burst: 0 00:27:01.424 IEEE OUI Identifier: 00 00 00 00:27:01.424 Multi-path I/O 00:27:01.424 May have multiple subsystem ports: No 00:27:01.424 May have multiple 
controllers: No 00:27:01.424 Associated with SR-IOV VF: No 00:27:01.424 Max Data Transfer Size: Unlimited 00:27:01.424 Max Number of Namespaces: 0 00:27:01.424 Max Number of I/O Queues: 1024 00:27:01.424 NVMe Specification Version (VS): 1.3 00:27:01.424 NVMe Specification Version (Identify): 1.3 00:27:01.424 Maximum Queue Entries: 1024 00:27:01.424 Contiguous Queues Required: No 00:27:01.424 Arbitration Mechanisms Supported 00:27:01.424 Weighted Round Robin: Not Supported 00:27:01.424 Vendor Specific: Not Supported 00:27:01.424 Reset Timeout: 7500 ms 00:27:01.424 Doorbell Stride: 4 bytes 00:27:01.424 NVM Subsystem Reset: Not Supported 00:27:01.424 Command Sets Supported 00:27:01.424 NVM Command Set: Supported 00:27:01.424 Boot Partition: Not Supported 00:27:01.424 Memory Page Size Minimum: 4096 bytes 00:27:01.424 Memory Page Size Maximum: 4096 bytes 00:27:01.424 Persistent Memory Region: Not Supported 00:27:01.424 Optional Asynchronous Events Supported 00:27:01.424 Namespace Attribute Notices: Not Supported 00:27:01.424 Firmware Activation Notices: Not Supported 00:27:01.424 ANA Change Notices: Not Supported 00:27:01.424 PLE Aggregate Log Change Notices: Not Supported 00:27:01.424 LBA Status Info Alert Notices: Not Supported 00:27:01.424 EGE Aggregate Log Change Notices: Not Supported 00:27:01.424 Normal NVM Subsystem Shutdown event: Not Supported 00:27:01.424 Zone Descriptor Change Notices: Not Supported 00:27:01.424 Discovery Log Change Notices: Supported 00:27:01.424 Controller Attributes 00:27:01.424 128-bit Host Identifier: Not Supported 00:27:01.424 Non-Operational Permissive Mode: Not Supported 00:27:01.424 NVM Sets: Not Supported 00:27:01.424 Read Recovery Levels: Not Supported 00:27:01.424 Endurance Groups: Not Supported 00:27:01.424 Predictable Latency Mode: Not Supported 00:27:01.424 Traffic Based Keep ALive: Not Supported 00:27:01.424 Namespace Granularity: Not Supported 00:27:01.424 SQ Associations: Not Supported 00:27:01.424 UUID List: Not Supported 00:27:01.424 Multi-Domain Subsystem: Not Supported 00:27:01.424 Fixed Capacity Management: Not Supported 00:27:01.424 Variable Capacity Management: Not Supported 00:27:01.424 Delete Endurance Group: Not Supported 00:27:01.424 Delete NVM Set: Not Supported 00:27:01.424 Extended LBA Formats Supported: Not Supported 00:27:01.424 Flexible Data Placement Supported: Not Supported 00:27:01.424 00:27:01.424 Controller Memory Buffer Support 00:27:01.424 ================================ 00:27:01.424 Supported: No 00:27:01.424 00:27:01.424 Persistent Memory Region Support 00:27:01.424 ================================ 00:27:01.424 Supported: No 00:27:01.424 00:27:01.424 Admin Command Set Attributes 00:27:01.424 ============================ 00:27:01.424 Security Send/Receive: Not Supported 00:27:01.424 Format NVM: Not Supported 00:27:01.424 Firmware Activate/Download: Not Supported 00:27:01.424 Namespace Management: Not Supported 00:27:01.424 Device Self-Test: Not Supported 00:27:01.424 Directives: Not Supported 00:27:01.424 NVMe-MI: Not Supported 00:27:01.424 Virtualization Management: Not Supported 00:27:01.424 Doorbell Buffer Config: Not Supported 00:27:01.424 Get LBA Status Capability: Not Supported 00:27:01.424 Command & Feature Lockdown Capability: Not Supported 00:27:01.424 Abort Command Limit: 1 00:27:01.424 Async Event Request Limit: 1 00:27:01.424 Number of Firmware Slots: N/A 00:27:01.424 Firmware Slot 1 Read-Only: N/A 00:27:01.424 Firmware Activation Without Reset: N/A 00:27:01.424 Multiple Update Detection Support: N/A 
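Aside, while the discovery-controller dump scrolls past: the target answering on 10.0.0.1:4420 is the Linux kernel nvmet target, assembled purely through configfs by the configure_kernel_target trace above. The xtrace does not show output redirections, so the attribute file names below are the standard nvmet configfs ones, inferred rather than traced; everything else matches the log:

  modprobe nvmet                                # nvmet_tcp ends up loaded too, per the later modprobe -r
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1 > ports/1/addr_traddr           # listen address in the target namespace
  echo tcp      > ports/1/addr_trtype
  echo 4420     > ports/1/addr_trsvcid
  echo ipv4     > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" line in the second identify dump below corroborates the attr_model write.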
00:27:01.424 Firmware Update Granularity: No Information Provided 00:27:01.424 Per-Namespace SMART Log: No 00:27:01.424 Asymmetric Namespace Access Log Page: Not Supported 00:27:01.424 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:01.424 Command Effects Log Page: Not Supported 00:27:01.424 Get Log Page Extended Data: Supported 00:27:01.424 Telemetry Log Pages: Not Supported 00:27:01.424 Persistent Event Log Pages: Not Supported 00:27:01.424 Supported Log Pages Log Page: May Support 00:27:01.424 Commands Supported & Effects Log Page: Not Supported 00:27:01.424 Feature Identifiers & Effects Log Page:May Support 00:27:01.424 NVMe-MI Commands & Effects Log Page: May Support 00:27:01.424 Data Area 4 for Telemetry Log: Not Supported 00:27:01.424 Error Log Page Entries Supported: 1 00:27:01.424 Keep Alive: Not Supported 00:27:01.424 00:27:01.424 NVM Command Set Attributes 00:27:01.424 ========================== 00:27:01.424 Submission Queue Entry Size 00:27:01.424 Max: 1 00:27:01.424 Min: 1 00:27:01.424 Completion Queue Entry Size 00:27:01.424 Max: 1 00:27:01.424 Min: 1 00:27:01.424 Number of Namespaces: 0 00:27:01.424 Compare Command: Not Supported 00:27:01.424 Write Uncorrectable Command: Not Supported 00:27:01.424 Dataset Management Command: Not Supported 00:27:01.424 Write Zeroes Command: Not Supported 00:27:01.424 Set Features Save Field: Not Supported 00:27:01.424 Reservations: Not Supported 00:27:01.424 Timestamp: Not Supported 00:27:01.424 Copy: Not Supported 00:27:01.424 Volatile Write Cache: Not Present 00:27:01.424 Atomic Write Unit (Normal): 1 00:27:01.424 Atomic Write Unit (PFail): 1 00:27:01.424 Atomic Compare & Write Unit: 1 00:27:01.424 Fused Compare & Write: Not Supported 00:27:01.424 Scatter-Gather List 00:27:01.424 SGL Command Set: Supported 00:27:01.424 SGL Keyed: Not Supported 00:27:01.424 SGL Bit Bucket Descriptor: Not Supported 00:27:01.424 SGL Metadata Pointer: Not Supported 00:27:01.424 Oversized SGL: Not Supported 00:27:01.424 SGL Metadata Address: Not Supported 00:27:01.424 SGL Offset: Supported 00:27:01.424 Transport SGL Data Block: Not Supported 00:27:01.425 Replay Protected Memory Block: Not Supported 00:27:01.425 00:27:01.425 Firmware Slot Information 00:27:01.425 ========================= 00:27:01.425 Active slot: 0 00:27:01.425 00:27:01.425 00:27:01.425 Error Log 00:27:01.425 ========= 00:27:01.425 00:27:01.425 Active Namespaces 00:27:01.425 ================= 00:27:01.425 Discovery Log Page 00:27:01.425 ================== 00:27:01.425 Generation Counter: 2 00:27:01.425 Number of Records: 2 00:27:01.425 Record Format: 0 00:27:01.425 00:27:01.425 Discovery Log Entry 0 00:27:01.425 ---------------------- 00:27:01.425 Transport Type: 3 (TCP) 00:27:01.425 Address Family: 1 (IPv4) 00:27:01.425 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:01.425 Entry Flags: 00:27:01.425 Duplicate Returned Information: 0 00:27:01.425 Explicit Persistent Connection Support for Discovery: 0 00:27:01.425 Transport Requirements: 00:27:01.425 Secure Channel: Not Specified 00:27:01.425 Port ID: 1 (0x0001) 00:27:01.425 Controller ID: 65535 (0xffff) 00:27:01.425 Admin Max SQ Size: 32 00:27:01.425 Transport Service Identifier: 4420 00:27:01.425 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:01.425 Transport Address: 10.0.0.1 00:27:01.425 Discovery Log Entry 1 00:27:01.425 ---------------------- 00:27:01.425 Transport Type: 3 (TCP) 00:27:01.425 Address Family: 1 (IPv4) 00:27:01.425 Subsystem Type: 2 (NVM Subsystem) 00:27:01.425 Entry Flags: 
00:27:01.425 Duplicate Returned Information: 0 00:27:01.425 Explicit Persistent Connection Support for Discovery: 0 00:27:01.425 Transport Requirements: 00:27:01.425 Secure Channel: Not Specified 00:27:01.425 Port ID: 1 (0x0001) 00:27:01.425 Controller ID: 65535 (0xffff) 00:27:01.425 Admin Max SQ Size: 32 00:27:01.425 Transport Service Identifier: 4420 00:27:01.425 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:01.425 Transport Address: 10.0.0.1 00:27:01.425 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:01.425 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.425 get_feature(0x01) failed 00:27:01.425 get_feature(0x02) failed 00:27:01.425 get_feature(0x04) failed 00:27:01.425 ===================================================== 00:27:01.425 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:01.425 ===================================================== 00:27:01.425 Controller Capabilities/Features 00:27:01.425 ================================ 00:27:01.425 Vendor ID: 0000 00:27:01.425 Subsystem Vendor ID: 0000 00:27:01.425 Serial Number: ddef85845e93c9cf51c0 00:27:01.425 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:01.425 Firmware Version: 6.7.0-68 00:27:01.425 Recommended Arb Burst: 6 00:27:01.425 IEEE OUI Identifier: 00 00 00 00:27:01.425 Multi-path I/O 00:27:01.425 May have multiple subsystem ports: Yes 00:27:01.425 May have multiple controllers: Yes 00:27:01.425 Associated with SR-IOV VF: No 00:27:01.425 Max Data Transfer Size: Unlimited 00:27:01.425 Max Number of Namespaces: 1024 00:27:01.425 Max Number of I/O Queues: 128 00:27:01.425 NVMe Specification Version (VS): 1.3 00:27:01.425 NVMe Specification Version (Identify): 1.3 00:27:01.425 Maximum Queue Entries: 1024 00:27:01.425 Contiguous Queues Required: No 00:27:01.425 Arbitration Mechanisms Supported 00:27:01.425 Weighted Round Robin: Not Supported 00:27:01.425 Vendor Specific: Not Supported 00:27:01.425 Reset Timeout: 7500 ms 00:27:01.425 Doorbell Stride: 4 bytes 00:27:01.425 NVM Subsystem Reset: Not Supported 00:27:01.425 Command Sets Supported 00:27:01.425 NVM Command Set: Supported 00:27:01.425 Boot Partition: Not Supported 00:27:01.425 Memory Page Size Minimum: 4096 bytes 00:27:01.425 Memory Page Size Maximum: 4096 bytes 00:27:01.425 Persistent Memory Region: Not Supported 00:27:01.425 Optional Asynchronous Events Supported 00:27:01.425 Namespace Attribute Notices: Supported 00:27:01.425 Firmware Activation Notices: Not Supported 00:27:01.425 ANA Change Notices: Supported 00:27:01.425 PLE Aggregate Log Change Notices: Not Supported 00:27:01.425 LBA Status Info Alert Notices: Not Supported 00:27:01.425 EGE Aggregate Log Change Notices: Not Supported 00:27:01.425 Normal NVM Subsystem Shutdown event: Not Supported 00:27:01.425 Zone Descriptor Change Notices: Not Supported 00:27:01.425 Discovery Log Change Notices: Not Supported 00:27:01.425 Controller Attributes 00:27:01.425 128-bit Host Identifier: Supported 00:27:01.425 Non-Operational Permissive Mode: Not Supported 00:27:01.425 NVM Sets: Not Supported 00:27:01.425 Read Recovery Levels: Not Supported 00:27:01.425 Endurance Groups: Not Supported 00:27:01.425 Predictable Latency Mode: Not Supported 00:27:01.425 Traffic Based Keep ALive: Supported 00:27:01.425 Namespace Granularity: Not Supported 
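The test runs spdk_nvme_identify twice, differing only in the target NQN: once against the well-known discovery subsystem (the dump above) and once against the exported NVM subsystem (the dump in progress here). The get_feature(0x01/0x02/0x04) failures logged at the start of the second run appear to come from optional features the kernel target does not implement; the identify itself succeeds. The pair of invocations, with the long workspace prefix trimmed:

  # discovery controller (well-known NQN)
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  # exported NVM subsystem
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'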
00:27:01.425 SQ Associations: Not Supported 00:27:01.425 UUID List: Not Supported 00:27:01.425 Multi-Domain Subsystem: Not Supported 00:27:01.425 Fixed Capacity Management: Not Supported 00:27:01.425 Variable Capacity Management: Not Supported 00:27:01.425 Delete Endurance Group: Not Supported 00:27:01.425 Delete NVM Set: Not Supported 00:27:01.425 Extended LBA Formats Supported: Not Supported 00:27:01.425 Flexible Data Placement Supported: Not Supported 00:27:01.425 00:27:01.425 Controller Memory Buffer Support 00:27:01.425 ================================ 00:27:01.425 Supported: No 00:27:01.425 00:27:01.425 Persistent Memory Region Support 00:27:01.425 ================================ 00:27:01.425 Supported: No 00:27:01.425 00:27:01.425 Admin Command Set Attributes 00:27:01.425 ============================ 00:27:01.425 Security Send/Receive: Not Supported 00:27:01.425 Format NVM: Not Supported 00:27:01.425 Firmware Activate/Download: Not Supported 00:27:01.425 Namespace Management: Not Supported 00:27:01.425 Device Self-Test: Not Supported 00:27:01.425 Directives: Not Supported 00:27:01.425 NVMe-MI: Not Supported 00:27:01.425 Virtualization Management: Not Supported 00:27:01.425 Doorbell Buffer Config: Not Supported 00:27:01.425 Get LBA Status Capability: Not Supported 00:27:01.425 Command & Feature Lockdown Capability: Not Supported 00:27:01.425 Abort Command Limit: 4 00:27:01.425 Async Event Request Limit: 4 00:27:01.425 Number of Firmware Slots: N/A 00:27:01.425 Firmware Slot 1 Read-Only: N/A 00:27:01.425 Firmware Activation Without Reset: N/A 00:27:01.425 Multiple Update Detection Support: N/A 00:27:01.425 Firmware Update Granularity: No Information Provided 00:27:01.425 Per-Namespace SMART Log: Yes 00:27:01.425 Asymmetric Namespace Access Log Page: Supported 00:27:01.425 ANA Transition Time : 10 sec 00:27:01.425 00:27:01.425 Asymmetric Namespace Access Capabilities 00:27:01.425 ANA Optimized State : Supported 00:27:01.425 ANA Non-Optimized State : Supported 00:27:01.425 ANA Inaccessible State : Supported 00:27:01.425 ANA Persistent Loss State : Supported 00:27:01.425 ANA Change State : Supported 00:27:01.425 ANAGRPID is not changed : No 00:27:01.425 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:01.425 00:27:01.425 ANA Group Identifier Maximum : 128 00:27:01.425 Number of ANA Group Identifiers : 128 00:27:01.425 Max Number of Allowed Namespaces : 1024 00:27:01.425 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:01.425 Command Effects Log Page: Supported 00:27:01.425 Get Log Page Extended Data: Supported 00:27:01.425 Telemetry Log Pages: Not Supported 00:27:01.425 Persistent Event Log Pages: Not Supported 00:27:01.425 Supported Log Pages Log Page: May Support 00:27:01.425 Commands Supported & Effects Log Page: Not Supported 00:27:01.425 Feature Identifiers & Effects Log Page:May Support 00:27:01.425 NVMe-MI Commands & Effects Log Page: May Support 00:27:01.425 Data Area 4 for Telemetry Log: Not Supported 00:27:01.425 Error Log Page Entries Supported: 128 00:27:01.425 Keep Alive: Supported 00:27:01.425 Keep Alive Granularity: 1000 ms 00:27:01.425 00:27:01.425 NVM Command Set Attributes 00:27:01.425 ========================== 00:27:01.425 Submission Queue Entry Size 00:27:01.425 Max: 64 00:27:01.425 Min: 64 00:27:01.425 Completion Queue Entry Size 00:27:01.425 Max: 16 00:27:01.425 Min: 16 00:27:01.425 Number of Namespaces: 1024 00:27:01.425 Compare Command: Not Supported 00:27:01.425 Write Uncorrectable Command: Not Supported 00:27:01.425 Dataset Management Command: Supported 
00:27:01.425 Write Zeroes Command: Supported 00:27:01.425 Set Features Save Field: Not Supported 00:27:01.425 Reservations: Not Supported 00:27:01.425 Timestamp: Not Supported 00:27:01.425 Copy: Not Supported 00:27:01.425 Volatile Write Cache: Present 00:27:01.425 Atomic Write Unit (Normal): 1 00:27:01.425 Atomic Write Unit (PFail): 1 00:27:01.425 Atomic Compare & Write Unit: 1 00:27:01.425 Fused Compare & Write: Not Supported 00:27:01.425 Scatter-Gather List 00:27:01.425 SGL Command Set: Supported 00:27:01.425 SGL Keyed: Not Supported 00:27:01.425 SGL Bit Bucket Descriptor: Not Supported 00:27:01.425 SGL Metadata Pointer: Not Supported 00:27:01.425 Oversized SGL: Not Supported 00:27:01.425 SGL Metadata Address: Not Supported 00:27:01.425 SGL Offset: Supported 00:27:01.425 Transport SGL Data Block: Not Supported 00:27:01.426 Replay Protected Memory Block: Not Supported 00:27:01.426 00:27:01.426 Firmware Slot Information 00:27:01.426 ========================= 00:27:01.426 Active slot: 0 00:27:01.426 00:27:01.426 Asymmetric Namespace Access 00:27:01.426 =========================== 00:27:01.426 Change Count : 0 00:27:01.426 Number of ANA Group Descriptors : 1 00:27:01.426 ANA Group Descriptor : 0 00:27:01.426 ANA Group ID : 1 00:27:01.426 Number of NSID Values : 1 00:27:01.426 Change Count : 0 00:27:01.426 ANA State : 1 00:27:01.426 Namespace Identifier : 1 00:27:01.426 00:27:01.426 Commands Supported and Effects 00:27:01.426 ============================== 00:27:01.426 Admin Commands 00:27:01.426 -------------- 00:27:01.426 Get Log Page (02h): Supported 00:27:01.426 Identify (06h): Supported 00:27:01.426 Abort (08h): Supported 00:27:01.426 Set Features (09h): Supported 00:27:01.426 Get Features (0Ah): Supported 00:27:01.426 Asynchronous Event Request (0Ch): Supported 00:27:01.426 Keep Alive (18h): Supported 00:27:01.426 I/O Commands 00:27:01.426 ------------ 00:27:01.426 Flush (00h): Supported 00:27:01.426 Write (01h): Supported LBA-Change 00:27:01.426 Read (02h): Supported 00:27:01.426 Write Zeroes (08h): Supported LBA-Change 00:27:01.426 Dataset Management (09h): Supported 00:27:01.426 00:27:01.426 Error Log 00:27:01.426 ========= 00:27:01.426 Entry: 0 00:27:01.426 Error Count: 0x3 00:27:01.426 Submission Queue Id: 0x0 00:27:01.426 Command Id: 0x5 00:27:01.426 Phase Bit: 0 00:27:01.426 Status Code: 0x2 00:27:01.426 Status Code Type: 0x0 00:27:01.426 Do Not Retry: 1 00:27:01.426 Error Location: 0x28 00:27:01.426 LBA: 0x0 00:27:01.426 Namespace: 0x0 00:27:01.426 Vendor Log Page: 0x0 00:27:01.426 ----------- 00:27:01.426 Entry: 1 00:27:01.426 Error Count: 0x2 00:27:01.426 Submission Queue Id: 0x0 00:27:01.426 Command Id: 0x5 00:27:01.426 Phase Bit: 0 00:27:01.426 Status Code: 0x2 00:27:01.426 Status Code Type: 0x0 00:27:01.426 Do Not Retry: 1 00:27:01.426 Error Location: 0x28 00:27:01.426 LBA: 0x0 00:27:01.426 Namespace: 0x0 00:27:01.426 Vendor Log Page: 0x0 00:27:01.426 ----------- 00:27:01.426 Entry: 2 00:27:01.426 Error Count: 0x1 00:27:01.426 Submission Queue Id: 0x0 00:27:01.426 Command Id: 0x4 00:27:01.426 Phase Bit: 0 00:27:01.426 Status Code: 0x2 00:27:01.426 Status Code Type: 0x0 00:27:01.426 Do Not Retry: 1 00:27:01.426 Error Location: 0x28 00:27:01.426 LBA: 0x0 00:27:01.426 Namespace: 0x0 00:27:01.426 Vendor Log Page: 0x0 00:27:01.426 00:27:01.426 Number of Queues 00:27:01.426 ================ 00:27:01.426 Number of I/O Submission Queues: 128 00:27:01.426 Number of I/O Completion Queues: 128 00:27:01.426 00:27:01.426 ZNS Specific Controller Data 00:27:01.426 
============================ 00:27:01.426 Zone Append Size Limit: 0 00:27:01.426 00:27:01.426 00:27:01.426 Active Namespaces 00:27:01.426 ================= 00:27:01.426 get_feature(0x05) failed 00:27:01.426 Namespace ID:1 00:27:01.426 Command Set Identifier: NVM (00h) 00:27:01.426 Deallocate: Supported 00:27:01.426 Deallocated/Unwritten Error: Not Supported 00:27:01.426 Deallocated Read Value: Unknown 00:27:01.426 Deallocate in Write Zeroes: Not Supported 00:27:01.426 Deallocated Guard Field: 0xFFFF 00:27:01.426 Flush: Supported 00:27:01.426 Reservation: Not Supported 00:27:01.426 Namespace Sharing Capabilities: Multiple Controllers 00:27:01.426 Size (in LBAs): 3750748848 (1788GiB) 00:27:01.426 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:01.426 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:01.426 UUID: 6c30c68e-6551-4aef-9b47-d3eec849b006 00:27:01.426 Thin Provisioning: Not Supported 00:27:01.426 Per-NS Atomic Units: Yes 00:27:01.426 Atomic Write Unit (Normal): 8 00:27:01.426 Atomic Write Unit (PFail): 8 00:27:01.426 Preferred Write Granularity: 8 00:27:01.426 Atomic Compare & Write Unit: 8 00:27:01.426 Atomic Boundary Size (Normal): 0 00:27:01.426 Atomic Boundary Size (PFail): 0 00:27:01.426 Atomic Boundary Offset: 0 00:27:01.426 NGUID/EUI64 Never Reused: No 00:27:01.426 ANA group ID: 1 00:27:01.426 Namespace Write Protected: No 00:27:01.426 Number of LBA Formats: 1 00:27:01.426 Current LBA Format: LBA Format #00 00:27:01.426 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:01.426 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.426 rmmod nvme_tcp 00:27:01.426 rmmod nvme_fabrics 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.426 11:05:18 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:03.972 11:05:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:07.276 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:07.276 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:07.850 00:27:07.850 real 0m19.165s 00:27:07.850 user 0m5.154s 00:27:07.850 sys 0m10.884s 00:27:07.850 11:05:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:07.850 11:05:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:07.850 ************************************ 00:27:07.850 END TEST nvmf_identify_kernel_target 00:27:07.850 ************************************ 00:27:07.850 11:05:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:07.850 11:05:24 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:07.850 11:05:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:07.850 11:05:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.850 11:05:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.850 ************************************ 
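For completeness, the clean_kernel_target trace at the top of this block is the configfs build-up in reverse; as with the setup, the redirection target of the echo is inferred from the standard nvmet layout rather than visible in the xtrace:

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet                   # unload the target modules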
00:27:07.850 START TEST nvmf_auth_host 00:27:07.850 ************************************ 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:07.850 * Looking for test storage... 00:27:07.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.850 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:07.851 11:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.995 
11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:15.995 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:15.995 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:15.995 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:15.995 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:15.995 11:05:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.995 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.995 11:05:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.995 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:15.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:27:15.995 00:27:15.995 --- 10.0.0.2 ping statistics --- 00:27:15.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.995 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:27:15.995 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:15.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:27:15.995 00:27:15.995 --- 10.0.0.1 ping statistics --- 00:27:15.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.995 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2249642 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2249642 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2249642 ']' 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
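
The nvmftestinit plumbing traced above reduces to a short ip(8) sequence: the first e810 port is moved into a private namespace and becomes the target side, the second stays in the root namespace as the initiator, and the ping pair verifies both directions before any NVMe traffic flows. A standalone sketch using the names from this run (cvl_0_0/cvl_0_1 and 10.0.0.0/24; other rigs will have different interface names):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
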
00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:15.996 11:05:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.256 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.256 11:05:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:16.256 11:05:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:16.256 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:16.256 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:16.256 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:16.256 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:16.256 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:16.256 11:05:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f1e6ad00e6e568538d7c482831ac8fb4 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.btD 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f1e6ad00e6e568538d7c482831ac8fb4 0 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f1e6ad00e6e568538d7c482831ac8fb4 0 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f1e6ad00e6e568538d7c482831ac8fb4 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.btD 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.btD 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.btD 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:16.256 
11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2b98b57818a1386d2019068e47ab66873d4237ac3e9f5932c536fc934f6a03ec 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7VO 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b98b57818a1386d2019068e47ab66873d4237ac3e9f5932c536fc934f6a03ec 3 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b98b57818a1386d2019068e47ab66873d4237ac3e9f5932c536fc934f6a03ec 3 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2b98b57818a1386d2019068e47ab66873d4237ac3e9f5932c536fc934f6a03ec 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7VO 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7VO 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7VO 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=80b317fc84cfaee5d5804416838cda98aa53394872a7162f 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9H1 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 80b317fc84cfaee5d5804416838cda98aa53394872a7162f 0 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 80b317fc84cfaee5d5804416838cda98aa53394872a7162f 0 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=80b317fc84cfaee5d5804416838cda98aa53394872a7162f 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9H1 00:27:16.256 11:05:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9H1 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9H1 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3bb8ced92f39f49819b1b24843c9f4a915adef1c24d4a1b0 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.G4U 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3bb8ced92f39f49819b1b24843c9f4a915adef1c24d4a1b0 2 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3bb8ced92f39f49819b1b24843c9f4a915adef1c24d4a1b0 2 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3bb8ced92f39f49819b1b24843c9f4a915adef1c24d4a1b0 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:16.256 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.G4U 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.G4U 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.G4U 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=817aadddf246af9f2f8d501bc3edf144 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1W1 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 817aadddf246af9f2f8d501bc3edf144 1 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 817aadddf246af9f2f8d501bc3edf144 1 
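
Each gen_dhchap_key call above follows the same recipe: draw len/2 random bytes as a hex string with xxd, then wrap that string in the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hash id>:<base64(secret || CRC-32(secret))>:, where the hash id is 00/01/02/03 for null/sha256/sha384/sha512. A minimal standalone approximation (the Python body stands in for the helper's inline "python -" step, so treat it as a sketch rather than the verbatim implementation):

digest_id=1                                  # 0=null 1=sha256 2=sha384 3=sha512
key=$(xxd -p -c0 -l 16 /dev/urandom)         # 32 hex chars, as in "gen_dhchap_key null 32"
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" "$digest_id" > "$file" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 of the secret, appended
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
EOF
chmod 0600 "$file"                           # keys land in /tmp with owner-only access
echo "$file"
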
00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=817aadddf246af9f2f8d501bc3edf144 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1W1 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1W1 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1W1 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ce517b30c58714176d52035061b888f3 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ODc 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ce517b30c58714176d52035061b888f3 1 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ce517b30c58714176d52035061b888f3 1 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.516 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ce517b30c58714176d52035061b888f3 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ODc 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ODc 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ODc 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=a8aea65075f72b56aec7db68830f503ed49f4cf309cf0551 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.THB 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a8aea65075f72b56aec7db68830f503ed49f4cf309cf0551 2 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a8aea65075f72b56aec7db68830f503ed49f4cf309cf0551 2 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a8aea65075f72b56aec7db68830f503ed49f4cf309cf0551 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.THB 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.THB 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.THB 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ca79edb26048726d7554bd495fbf6b4f 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.v4M 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ca79edb26048726d7554bd495fbf6b4f 0 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ca79edb26048726d7554bd495fbf6b4f 0 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ca79edb26048726d7554bd495fbf6b4f 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:16.517 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.v4M 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.v4M 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.v4M 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=64019e09ee1a4345e16a2f7a193be5a0996ddfc0cc558ca7cdbd8df11f8db267 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PPI 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 64019e09ee1a4345e16a2f7a193be5a0996ddfc0cc558ca7cdbd8df11f8db267 3 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 64019e09ee1a4345e16a2f7a193be5a0996ddfc0cc558ca7cdbd8df11f8db267 3 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=64019e09ee1a4345e16a2f7a193be5a0996ddfc0cc558ca7cdbd8df11f8db267 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PPI 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PPI 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.PPI 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2249642 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2249642 ']' 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
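
With the target up, the loop that follows registers every generated secret with nvmf_tgt's keyring so the DH-HMAC-CHAP code can later reference them by name (keyN for the host secret, ckeyN for the optional controller secret). Outside the harness, rpc_cmd is plain rpc.py against the target's RPC socket (default /var/tmp/spdk.sock); for the first two slots of this run that amounts to:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.btD
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7VO
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.9H1
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G4U
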
00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:16.777 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.btD 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7VO ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7VO 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9H1 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.G4U ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G4U 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1W1 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ODc ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ODc 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.THB 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.v4M ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.v4M 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.PPI 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.037 11:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:17.038 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:17.038 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:17.038 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:17.038 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:17.038 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:17.038 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
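
configure_kernel_target, entered here, builds the in-kernel counterpart purely through the nvmet configfs tree: one subsystem backed by the local NVMe namespace, one TCP port on the namespaced 10.0.0.1, and a symlink to publish the subsystem on the port (the setup.sh reset pass traced below first returns the data drive from vfio-pci to the kernel nvme driver so /dev/nvme0n1 exists). Stripped of the harness variables, the sequence that follows is approximately:

modprobe nvmet
cd /sys/kernel/config/nvmet
subsys=subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys"
mkdir "$subsys/namespaces/1" ports/1
echo 1            > "$subsys/attr_allow_any_host"   # auth.sh later flips this to 0
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > ports/1/addr_traddr
echo tcp          > ports/1/addr_trtype
echo 4420         > ports/1/addr_trsvcid
echo ipv4         > ports/1/addr_adrfam
ln -s "/sys/kernel/config/nvmet/$subsys" ports/1/subsystems/
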
00:27:17.038 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:17.038 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:17.038 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:17.038 11:05:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:20.336 Waiting for block devices as requested 00:27:20.336 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:20.596 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:20.596 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:20.596 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:20.596 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:20.857 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:20.857 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:20.857 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:21.119 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:21.119 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:21.379 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:21.379 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:21.379 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:21.379 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:21.639 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:21.639 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:21.639 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:22.585 No valid GPT data, bailing 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:22.585 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:22.846 00:27:22.846 Discovery Log Number of Records 2, Generation counter 2 00:27:22.846 =====Discovery Log Entry 0====== 00:27:22.846 trtype: tcp 00:27:22.846 adrfam: ipv4 00:27:22.846 subtype: current discovery subsystem 00:27:22.846 treq: not specified, sq flow control disable supported 00:27:22.846 portid: 1 00:27:22.846 trsvcid: 4420 00:27:22.846 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:22.846 traddr: 10.0.0.1 00:27:22.846 eflags: none 00:27:22.846 sectype: none 00:27:22.846 =====Discovery Log Entry 1====== 00:27:22.846 trtype: tcp 00:27:22.846 adrfam: ipv4 00:27:22.846 subtype: nvme subsystem 00:27:22.846 treq: not specified, sq flow control disable supported 00:27:22.846 portid: 1 00:27:22.846 trsvcid: 4420 00:27:22.846 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:22.846 traddr: 10.0.0.1 00:27:22.846 eflags: none 00:27:22.846 sectype: none 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 
]] 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.846 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.847 nvme0n1 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.847 11:05:39 
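
On the kernel side, per-host authentication material lives under the same configfs tree: the mkdir/ln -s pair above pins the subsystem to the test host NQN, and nvmet_auth_set_key (traced as four echoes) selects the HMAC, the FFDHE group, and the two DHHC-1 secrets. The xtrace output hides redirection targets, so the attribute names below are the assumed destinations, based on the standard nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"                  # only allowed_hosts may connect
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"               # kernel crypto shash name
echo ffdhe2048      > "$host/dhchap_dhgroup"
cat /tmp/spdk.key-null.9H1   > "$host/dhchap_key"       # host secret
cat /tmp/spdk.key-sha384.G4U > "$host/dhchap_ctrl_key"  # controller secret -> bidirectional auth
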
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.847 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.108 
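
The initiator half of connect_authenticate is all SPDK RPC: restrict bdev_nvme to the digest and DH group under test, attach with the keyring names registered earlier, confirm the controller came up, and detach before the next combination. For the sha256/ffdhe2048/key0 iteration starting here, the equivalent direct invocation is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0              # clean up for the next combination
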
11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.108 11:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.108 nvme0n1 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.108 11:05:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.108 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.369 nvme0n1 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
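The pass traced above is one iteration of the test's sweep: host/auth.sh@100-@102 loop over digest, DH group and keyid, @103 (nvmet_auth_set_key) installs the secret on the target side, and @104 (connect_authenticate) reconfigures the host and re-attaches the controller with that key. Stripped of the xtrace noise, the host side of the sha256/ffdhe2048/keyid=1 iteration reduces to roughly the sketch below; rpc_cmd is SPDK's test wrapper around scripts/rpc.py, and key1/ckey1 are key names prepared earlier in the script (outside this excerpt), so the direct rpc.py calls here are assumed equivalents rather than lines from the log:

  # One host-side iteration, paraphrased from the rpc_cmd calls in the trace.
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # The controller only shows up if DH-HMAC-CHAP succeeded.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

Note that keyid 4 carries no controller key (ckey is empty in its @46 trace), so the @58 expansion ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} drops the flag and that iteration exercises unidirectional authentication only.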
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY:
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG:
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY:
00:27:23.369 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]]
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG:
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host --
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.370 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.631 nvme0n1 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:23.631 11:05:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.631 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.632 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.632 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.632 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.632 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.632 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.632 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.632 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.632 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.632 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.632 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.893 nvme0n1 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.893 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.154 nvme0n1 00:27:24.154 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.155 11:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.155 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.416 nvme0n1 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.416 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.417 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.417 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.417 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.417 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.417 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.417 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.417 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.417 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.417 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.678 nvme0n1 00:27:24.678 
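Every attach above is preceded by the same get_main_ns_ip trace (nvmf/common.sh@741-@755): the helper maps the transport to the name of the environment variable that holds the address to dial and only then dereferences it, which is why the trace first tests the literal string NVMF_INITIATOR_IP and afterwards the value 10.0.0.1. A reconstruction from the traced lines, with the control flow around them paraphrased:

  # Reconstructed from the nvmf/common.sh@741-@755 trace; in this run
  # TEST_TRANSPORT=tcp, so the helper resolves NVMF_INITIATOR_IP -> 10.0.0.1.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
      [[ -z ${!ip} ]] && return 1            # indirect expansion, here 10.0.0.1
      echo "${!ip}"
  }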
11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.678 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 nvme0n1 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
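On the target side, nvmet_auth_set_key's echoes (@48-@51: the digest as 'hmac(sha256)', the DH group, the key, and the controller key when one is set) install the authentication material for the host NQN. The log does not show where those echoes are redirected; against a Linux kernel nvmet target they would plausibly land in the host's DH-CHAP configfs attributes, so the sketch below is an assumed mapping, with paths taken from the kernel nvmet auth interface rather than from this log:

  # Assumed kernel-nvmet equivalent of the @48-@51 echoes; the configfs
  # paths and attribute names are NOT shown in this excerpt.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest (@48)
  echo ffdhe3072 > "$host/dhchap_dhgroup"         # DH group (@49)
  echo "$key" > "$host/dhchap_key"                # host secret (@50)
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # controller secret (@51)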
00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.939 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.200 nvme0n1 00:27:25.200 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.200 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.200 11:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.200 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.200 11:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.200 
11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.200 11:05:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.200 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 nvme0n1 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:25.461 11:05:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.461 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.722 nvme0n1 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:25.722 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.723 11:05:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.723 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.984 nvme0n1 00:27:25.984 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.984 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.984 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.984 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.984 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.984 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.984 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.984 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.984 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.984 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.244 11:05:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.244 11:05:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.503 nvme0n1 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
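All secrets in this section use the DHHC-1 representation from the NVMe DH-HMAC-CHAP specification, 'DHHC-1:xx:<base64>:', where xx encodes how the secret is sized/transformed (00 for an untransformed secret, 01/02/03 corresponding to SHA-256/384/512, matching the 32-, 48- and 64-byte secrets used for the different keyids in this log) and the base64 payload is the secret followed by a 4-byte CRC32 of it. A small hypothetical helper, not part of host/auth.sh, makes the layout easy to check:

  # Hypothetical helper: report the decoded payload size of a DHHC-1
  # secret; a 32-byte secret decodes to 36 bytes because of the CRC32.
  dhchap_key_len() {
      local b64=${1#DHHC-1:*:}   # strip the "DHHC-1:xx:" prefix
      b64=${b64%:}               # strip the trailing ':'
      printf '%s' "$b64" | base64 -d | wc -c
  }
  dhchap_key_len 'DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA:'   # -> 36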
00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:26.503 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.504 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.764 nvme0n1 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.764 11:05:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.764 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.025 nvme0n1 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:27.025 11:05:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.025 11:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.025 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.290 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.590 nvme0n1 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.590 
11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.590 11:05:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.590 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.209 nvme0n1 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.209 11:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.209 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.469 nvme0n1 00:27:28.469 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.469 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.469 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.469 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.469 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.469 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.730 
11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.730 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.992 nvme0n1 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.992 11:05:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.254 11:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.514 nvme0n1 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.514 11:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.452 nvme0n1 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.452 11:05:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.452 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.453 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.024 nvme0n1 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.024 11:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.595 nvme0n1 00:27:31.595 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.595 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.595 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.595 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.595 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.595 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.595 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.595 
11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.595 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.595 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
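Every attach in this log is preceded by the same get_main_ns_ip expansion (nvmf/common.sh@741-755), which resolves the address to dial per transport. A sketch reconstructed from the trace follows; TEST_TRANSPORT and the exported NVMF_* variables are assumptions consistent with the "[[ -z tcp ]]" and "echo 10.0.0.1" lines visible above.

    # Sketch of the address-selection helper traced at nvmf/common.sh@741-755.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@744
            ["tcp"]=NVMF_INITIATOR_IP       # common.sh@745
        )

        # common.sh@747: bail out if the transport or its candidate is unset
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # common.sh@748: ip=NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion, here 10.0.0.1
        [[ -z $ip ]] && return 1               # common.sh@750
        echo "$ip"                             # common.sh@755
    }

On this tcp run it always resolves to 10.0.0.1, the initiator-side address passed to -a in every bdev_nvme_attach_controller call above.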
00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.856 11:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.427 nvme0n1 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.427 
11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.427 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.999 nvme0n1 00:27:32.999 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.999 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.999 11:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.999 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.999 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.999 11:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.260 nvme0n1 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.260 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.520 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
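[Annotation] The for-loops printed at host/auth.sh@100-102 and the @55-65 body that keeps repeating through this section give away the structure of the test: every (digest, dhgroup, keyid) combination gets its own set-key / connect / verify / detach cycle. The recurring common/autotest_common.sh@559 xtrace_disable entries followed by @587 [[ 0 == 0 ]] are the rpc_cmd wrapper muting the trace around each RPC and then asserting that it exited 0. Reassembled from the trace (names exactly as they appear in the log; the assembly and comments are added, so read this as a sketch rather than the verbatim script):

  for digest in "${digests[@]}"; do                          # host/auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do                    # host/auth.sh@101
          for keyid in "${!keys[@]}"; do                     # host/auth.sh@102
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
          done
      done
  done

  connect_authenticate() {
      local digest dhgroup keyid ckey                        # host/auth.sh@55
      digest="$1" dhgroup="$2" keyid="$3"                    # host/auth.sh@57
      # Adds --dhchap-ctrlr-key only when a controller key exists for this keyid.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # host/auth.sh@58
      # Restrict the initiator to the single digest/dhgroup pair under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"                       # host/auth.sh@60
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"            # host/auth.sh@61
      # Authentication succeeded iff the controller actually came up.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # host/auth.sh@64
      rpc_cmd bdev_nvme_detach_controller nvme0              # host/auth.sh@65
  }

Two reading notes for the surrounding log: the ${ckeys[keyid]:+...} alternate-value expansion at @58 is why the keyid=4 passes (whose ckey is empty, see [[ -z '' ]] at @51) attach with --dhchap-key key4 alone while keyids 0-3 also pass --dhchap-ctrlr-key; and the [[ nvme0 == \n\v\m\e\0 ]] entries contain no backslashes in the script itself, that is merely how set -x renders a quoted right-hand side, escaping each character to mark it as a literal string rather than a glob pattern.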
00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.521 nvme0n1 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:33.521 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:33.782 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:33.782 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.782 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.782 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:33.782 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:33.782 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:33.782 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.783 nvme0n1 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.783 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.044 nvme0n1 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.044 11:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.304 nvme0n1 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
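[Annotation] The nvmf/common.sh@741-755 run that brackets this point (and precedes every attach in this section) is get_main_ns_ip, which picks the environment variable holding the connect address for the transport in use and prints its value. The trace only ever shows already-expanded values (tcp, NVMF_INITIATOR_IP, 10.0.0.1), so in the sketch below the name of the transport variable and the indirect ${!ip} expansion are inferred rather than read off the log:

  get_main_ns_ip() {
      local ip                                           # nvmf/common.sh@741
      local -A ip_candidates=()                          # nvmf/common.sh@742
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP         # nvmf/common.sh@744
      ip_candidates["tcp"]=NVMF_INITIATOR_IP             # nvmf/common.sh@745
      # Both -z tests print as @747: the transport must be set and must map
      # to a candidate. TEST_TRANSPORT is an assumed name; the log shows
      # only its value, tcp.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}               # nvmf/common.sh@748
      [[ -z ${!ip} ]] && return 1                        # nvmf/common.sh@750, tests the value, 10.0.0.1
      echo "${!ip}"                                      # nvmf/common.sh@755
  }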
00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.304 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.564 nvme0n1 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
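[Annotation] The host/auth.sh@42-51 pass that closes the line above is one complete nvmet_auth_set_key, programming the kernel nvmet target side with the key pair under test before each connect attempt. set -x does not print redirections, so the log never shows where the echoes at @48-51 land; the configfs paths below are therefore an assumption based on the usual nvmet per-host layout, as are the keys/ckeys arrays (implied by the @102 loop over "${!keys[@]}"):

  # Sketch only: redirect targets and array names are inferred, not logged.
  nvmet_auth_set_key() {                                     # host/auth.sh@42
      local digest dhgroup keyid key ckey
      digest="$1" dhgroup="$2" keyid="$3"                    # host/auth.sh@44
      key="${keys[keyid]}" ckey="${ckeys[keyid]}"            # host/auth.sh@45-46
      # Assumed nvmet configfs host directory for the initiator NQN.
      local hostdir="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"
      echo "hmac(${digest})" > "$hostdir/dhchap_hash"        # host/auth.sh@48
      echo "$dhgroup"        > "$hostdir/dhchap_dhgroup"     # host/auth.sh@49
      echo "$key"            > "$hostdir/dhchap_key"         # host/auth.sh@50
      [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"  # host/auth.sh@51
  }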
00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.564 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.825 nvme0n1 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.825 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.086 nvme0n1 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.086 11:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.086 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.345 nvme0n1 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.345 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.606 nvme0n1 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.606 11:05:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:35.606 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.607 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.865 nvme0n1 00:27:35.865 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.865 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.865 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.865 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.865 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.865 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.865 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.865 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.865 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.865 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.125 11:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.386 nvme0n1 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.386 11:05:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.386 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.647 nvme0n1 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:36.647 11:05:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.647 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.907 nvme0n1 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.907 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:37.167 11:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.427 nvme0n1 00:27:37.427 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.427 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.427 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.428 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.688 nvme0n1 00:27:37.688 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.688 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.688 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.688 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.688 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.949 11:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.209 nvme0n1 00:27:38.209 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.209 11:05:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.209 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.209 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.209 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.209 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:38.469 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.470 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.729 nvme0n1 00:27:38.729 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.729 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.729 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.729 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.729 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.729 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.729 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.729 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.729 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.729 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.018 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.018 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.018 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.019 11:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.279 nvme0n1 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.279 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.849 nvme0n1 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
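The host/auth.sh@42-51 sequence that repeats throughout this trace is the target-side half of each cycle: before every connect attempt it programs the kernel nvmet host entry with the digest, DH group, and DHHC-1 key that the initiator must then present. The destinations of the echoes are truncated out of this excerpt; a minimal sketch of the helper, assuming the standard nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) as the write targets, is:

    # Hypothetical reconstruction of nvmet_auth_set_key from the host/auth.sh
    # line references in this trace; hostdir and the attribute names are
    # assumptions, the echoed values appear verbatim in the log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3                        # host/auth.sh@42
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "${hostdir}/dhchap_hash"          # host/auth.sh@48
        echo "${dhgroup}" > "${hostdir}/dhchap_dhgroup"            # host/auth.sh@49
        echo "${key}" > "${hostdir}/dhchap_key"                    # host/auth.sh@50
        # a controller key is set only when one is defined for this keyid;
        # keyid 4 has an empty ckey, so that pass runs unidirectional auth
        [[ -z ${ckey} ]] || echo "${ckey}" > "${hostdir}/dhchap_ctrl_key"   # host/auth.sh@51
    }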
00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.849 11:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.419 nvme0n1 00:27:40.419 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.419 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.419 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.419 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.419 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.419 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.737 11:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.306 nvme0n1 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.307 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.875 nvme0n1 00:27:41.875 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.875 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.875 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.875 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.875 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.875 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.136 11:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.708 nvme0n1 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.708 11:05:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.708 11:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.651 nvme0n1 00:27:43.651 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.651 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.651 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.651 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.652 nvme0n1 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.652 11:06:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.652 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.913 nvme0n1 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:43.913 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.914 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.174 nvme0n1 00:27:44.175 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.175 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.175 11:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.175 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.175 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.175 11:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.175 11:06:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.175 11:06:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.175 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.436 nvme0n1 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.436 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.697 nvme0n1 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.697 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.957 nvme0n1 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.957 
11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.957 11:06:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.957 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.217 nvme0n1 00:27:45.217 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.217 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.217 11:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.217 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.217 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.217 11:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
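The echo entries immediately above and below this point come from nvmet_auth_set_key() in host/auth.sh: before each connect attempt the test provisions the target side (the nvmet_ prefix suggests the kernel nvmet target) with the digest, DH group, and DHCHAP secrets for the key index under test, here hmac(sha512) with ffdhe3072 and key index 2. xtrace does not record redirection targets, so the destination paths below are an assumption based on the stock nvmet configfs layout rather than anything visible in this log; a minimal sketch of the provisioning step:

  # Sketch only: provision DH-HMAC-CHAP parameters for one host on a kernel
  # nvmet target. NVMET_HOST and the configfs attribute names are assumptions;
  # the values are the ones this trace echoes for key index 2.
  NVMET_HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$NVMET_HOST/dhchap_hash"      # digest for DH-CHAP
  echo 'ffdhe3072' > "$NVMET_HOST/dhchap_dhgroup"      # FFDHE group under test
  echo 'DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY:' > "$NVMET_HOST/dhchap_key"        # host secret (key2)
  echo 'DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG:' > "$NVMET_HOST/dhchap_ctrl_key"   # controller secret (ckey2), enables bidirectional auth

Once the target holds these values, the host side mirrors them: bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 restricts the initiator to the same parameters, and bdev_nvme_attach_controller with --dhchap-key key2 --dhchap-ctrlr-key ckey2 performs the authenticated connect, as the entries that follow show. The surrounding section repeats this same cycle for every digest, DH group, and key index combination.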
00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.217 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.477 nvme0n1 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.477 11:06:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.477 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
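Every connect in this trace is preceded by the same get_main_ns_ip block (nvmf/common.sh@741-755), the first half of which appears just above: an associative array maps each transport to the name of the environment variable that holds the address to dial, NVMF_FIRST_TARGET_IP for rdma and NVMF_INITIATOR_IP for tcp, and the function prints the resolved value, which is 10.0.0.1 throughout this job. A rough reconstruction of that helper, paraphrased from the xtrace rather than copied from nvmf/common.sh (the TEST_TRANSPORT variable name is an assumption; the trace only shows the literal tcp):

  # Sketch: pick the address the host should dial for the transport under
  # test, mirroring the ip_candidates logic this trace walks through.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # rdma runs dial the target-side port
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # tcp runs dial the initiator address
      [[ -z $TEST_TRANSPORT ]] && return 1          # no transport selected
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}          # variable *name*, e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                   # dereference it; empty means unconfigured
      echo "${!ip}"                                 # 10.0.0.1 in this run
  }

The echoed address feeds straight into host/auth.sh@61's bdev_nvme_attach_controller call (-a 10.0.0.1 -s 4420), so every authentication case in this section reconnects to the same listener and varies only the digest, DH group, and key index.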
00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.478 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.738 nvme0n1 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.738 
11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.738 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.999 nvme0n1 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.999 11:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.259 nvme0n1 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:46.259 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.260 11:06:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.260 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.520 nvme0n1 00:27:46.520 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.520 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.520 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.520 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.520 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.520 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.520 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
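Each sha512 round traced above has the same shape: point the host at one digest/dhgroup pair, attach with the keypair under test, confirm the controller actually came up, then detach before the next keyid. A condensed sketch of one such round, reconstructed from the xtrace (rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py; key1/ckey1 refer to DH-HMAC-CHAP keys registered earlier in the run, outside this excerpt):

    # One connect_authenticate round as it reads from the trace above.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Authentication passed only if the controller materialized:
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0   # clean up before the next keyid

The --dhchap-ctrlr-key argument is present only when a controller key exists for that keyid; the ckey=(${ckeys[keyid]:+...}) expansion traced at auth.sh@58 is what drops it for keyid=4, whose ckey is empty.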
00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.780 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.781 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.041 nvme0n1 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:47.041 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.042 11:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.302 nvme0n1 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.302 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.594 nvme0n1 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
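The DHHC-1:xx:...: strings echoed by auth.sh@45-51 are DH-HMAC-CHAP secrets in the NVMe-oF textual representation. Reading them that way (this is the spec's secret format as I understand it, not something the log itself states): the middle field records how the secret was transformed (00 = not transformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload is the raw secret with a 4-byte CRC-32 appended. A quick way to pull one of the keys above apart:

    # Split a DHHC-1 secret from the trace into its fields.
    key='DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY:'
    IFS=: read -r fmt hmac b64 _ <<< "$key"
    echo "format=$fmt transform=$hmac"   # 01 -> SHA-256-transformed secret
    echo -n "$b64" | base64 -d | wc -c   # 36 = 32-byte secret + 4-byte CRC-32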
00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.594 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.192 nvme0n1 00:27:48.192 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.192 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.192 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.192 11:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.192 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.192 11:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
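The nvmf/common.sh@741-755 block that repeats before every attach is the helper that decides which address to dial for the transport under test. A reconstruction from the xtrace follows; the variable holding "tcp" is never named in the trace, so TEST_TRANSPORT below is an assumption, and the exact body in the SPDK tree may differ:

    # get_main_ns_ip as it reads from the trace: map transport -> the name of
    # the environment variable carrying the address, then dereference it.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # traced at @747 as [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]]
        [[ -z "$TEST_TRANSPORT" || -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z "${!ip}" ]] && return 1   # indirect expansion, i.e. $NVMF_INITIATOR_IP
        echo "${!ip}"                   # 10.0.0.1 for tcp in this run
    }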
00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.192 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.763 nvme0n1 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.763 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.024 nvme0n1 00:27:49.024 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.024 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.024 11:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.024 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.025 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.025 11:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.285 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.546 nvme0n1 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.546 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.806 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.066 nvme0n1 00:27:50.066 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.066 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.066 11:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.066 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.066 11:06:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.066 11:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFlNmFkMDBlNmU1Njg1MzhkN2M0ODI4MzFhYzhmYjSZxgqA: 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: ]] 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmI5OGI1NzgxOGExMzg2ZDIwMTkwNjhlNDdhYjY2ODczZDQyMzdhYzNlOWY1OTMyYzUzNmZjOTM0ZjZhMDNlY5Z5+4Q=: 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.066 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.326 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.326 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.326 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.326 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.895 nvme0n1 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.895 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.896 11:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.468 nvme0n1 00:27:51.468 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.468 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.468 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.468 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.468 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.468 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.729 11:06:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODE3YWFkZGRmMjQ2YWY5ZjJmOGQ1MDFiYzNlZGYxNDQq+vIY: 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: ]] 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U1MTdiMzBjNTg3MTQxNzZkNTIwMzUwNjFiODg4ZjNWuHbG: 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.729 11:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.301 nvme0n1 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YThhZWE2NTA3NWY3MmI1NmFlYzdkYjY4ODMwZjUwM2VkNDlmNGNmMzA5Y2YwNTUxGyWpCw==: 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: ]] 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2E3OWVkYjI2MDQ4NzI2ZDc1NTRiZDQ5NWZiZjZiNGYl5uif: 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:52.301 11:06:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.301 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.302 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.302 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.243 nvme0n1 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjQwMTllMDllZTFhNDM0NWUxNmEyZjdhMTkzYmU1YTA5OTZkZGZjMGNjNTU4Y2E3Y2RiZDhkZjExZjhkYjI2N4Oukx4=: 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.243 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:53.244 11:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.817 nvme0n1 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBiMzE3ZmM4NGNmYWVlNWQ1ODA0NDE2ODM4Y2RhOThhYTUzMzk0ODcyYTcxNjJmk7dyRA==: 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: ]] 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JiOGNlZDkyZjM5ZjQ5ODE5YjFiMjQ4NDNjOWY0YTkxNWFkZWYxYzI0ZDRhMWIwlvYYqg==: 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.817 
11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.817 request: 00:27:53.817 { 00:27:53.817 "name": "nvme0", 00:27:53.817 "trtype": "tcp", 00:27:53.817 "traddr": "10.0.0.1", 00:27:53.817 "adrfam": "ipv4", 00:27:53.817 "trsvcid": "4420", 00:27:53.817 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:53.817 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:53.817 "prchk_reftag": false, 00:27:53.817 "prchk_guard": false, 00:27:53.817 "hdgst": false, 00:27:53.817 "ddgst": false, 00:27:53.817 "method": "bdev_nvme_attach_controller", 00:27:53.817 "req_id": 1 00:27:53.817 } 00:27:53.817 Got JSON-RPC error response 00:27:53.817 response: 00:27:53.817 { 00:27:53.817 "code": -5, 00:27:53.817 "message": "Input/output error" 00:27:53.817 } 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:53.817 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.818 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.079 request: 00:27:54.079 { 00:27:54.079 "name": "nvme0", 00:27:54.079 "trtype": "tcp", 00:27:54.079 "traddr": "10.0.0.1", 00:27:54.079 "adrfam": "ipv4", 00:27:54.079 "trsvcid": "4420", 00:27:54.079 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:54.079 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:54.079 "prchk_reftag": false, 00:27:54.079 "prchk_guard": false, 00:27:54.079 "hdgst": false, 00:27:54.079 "ddgst": false, 00:27:54.079 "dhchap_key": "key2", 00:27:54.079 "method": "bdev_nvme_attach_controller", 00:27:54.079 "req_id": 1 00:27:54.079 } 00:27:54.079 Got JSON-RPC error response 00:27:54.079 response: 00:27:54.079 { 00:27:54.079 "code": -5, 00:27:54.079 "message": "Input/output error" 00:27:54.079 } 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:54.079 11:06:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.079 request: 00:27:54.079 { 00:27:54.079 "name": "nvme0", 00:27:54.079 "trtype": "tcp", 00:27:54.079 "traddr": "10.0.0.1", 00:27:54.079 "adrfam": "ipv4", 
00:27:54.079 "trsvcid": "4420", 00:27:54.079 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:54.079 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:54.079 "prchk_reftag": false, 00:27:54.079 "prchk_guard": false, 00:27:54.079 "hdgst": false, 00:27:54.079 "ddgst": false, 00:27:54.079 "dhchap_key": "key1", 00:27:54.079 "dhchap_ctrlr_key": "ckey2", 00:27:54.079 "method": "bdev_nvme_attach_controller", 00:27:54.079 "req_id": 1 00:27:54.079 } 00:27:54.079 Got JSON-RPC error response 00:27:54.079 response: 00:27:54.079 { 00:27:54.079 "code": -5, 00:27:54.079 "message": "Input/output error" 00:27:54.079 } 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.079 11:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:54.079 rmmod nvme_tcp 00:27:54.079 rmmod nvme_fabrics 00:27:54.079 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.079 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:54.079 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:54.080 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2249642 ']' 00:27:54.080 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2249642 00:27:54.080 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2249642 ']' 00:27:54.080 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2249642 00:27:54.080 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:27:54.080 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:54.080 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2249642 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2249642' 00:27:54.340 killing process with pid 2249642 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2249642 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2249642 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.340 11:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:56.884 11:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:00.186 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:00.186 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:00.447 11:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.btD /tmp/spdk.key-null.9H1 /tmp/spdk.key-sha256.1W1 /tmp/spdk.key-sha384.THB /tmp/spdk.key-sha512.PPI 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:00.447 11:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:03.753 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:03.753 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:03.753 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:04.327 00:28:04.327 real 0m56.411s 00:28:04.327 user 0m50.226s 00:28:04.327 sys 0m15.345s 00:28:04.327 11:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:04.327 11:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.327 ************************************ 00:28:04.327 END TEST nvmf_auth_host 00:28:04.327 ************************************ 00:28:04.327 11:06:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:04.327 11:06:21 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:04.327 11:06:21 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:04.327 11:06:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:04.327 11:06:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.327 11:06:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:04.327 ************************************ 00:28:04.327 START TEST nvmf_digest 00:28:04.327 ************************************ 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:04.327 * Looking for test storage... 
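The nvmf_auth_host run that just ended exercised one connect/detach cycle per key: for each keyid the host restricts itself to the sha512/ffdhe8192 DH-HMAC-CHAP pair, attaches nvme0 to nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420 with --dhchap-key keyN (plus --dhchap-ctrlr-key ckeyN where bidirectional auth is tested), verifies the controller appears, and detaches it; the negative cases then confirm that omitting or mismatching keys fails with JSON-RPC error -5 (Input/output error). A minimal manual replay of the keyid=1 iteration, assuming the kernel target is still configured, the DHHC-1 secrets were registered earlier in the run as key1/ckey1 (that step is above this excerpt), and rpc.py talks to the host app's default socket:

    # $rpc is shorthand for this sketch, not a harness variable
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # restrict the host to the digest/dhgroup pair under test
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # authenticated attach; ckey1 makes the controller authenticate back to the host
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $rpc bdev_nvme_detach_controller nvme0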
00:28:04.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:04.327 11:06:21 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:04.327 11:06:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:04.328 11:06:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:12.480 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:12.481 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:12.481 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:12.481 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:12.481 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:12.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:28:12.481 00:28:12.481 --- 10.0.0.2 ping statistics --- 00:28:12.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.481 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:12.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:28:12.481 00:28:12.481 --- 10.0.0.1 ping statistics --- 00:28:12.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.481 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.481 ************************************ 00:28:12.481 START TEST nvmf_digest_clean 00:28:12.481 ************************************ 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2266494 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2266494 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2266494 ']' 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.481 
11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:12.481 11:06:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.481 [2024-07-12 11:06:28.712470] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:12.481 [2024-07-12 11:06:28.712533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.481 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.481 [2024-07-12 11:06:28.801704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.481 [2024-07-12 11:06:28.895106] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.481 [2024-07-12 11:06:28.895174] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.481 [2024-07-12 11:06:28.895183] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.481 [2024-07-12 11:06:28.895190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.481 [2024-07-12 11:06:28.895197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
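The nvmf_tcp_init sequence above splits the two e810 ports so the NVMe/TCP traffic actually leaves the box: cvl_0_0 becomes the target interface (10.0.0.2) inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and nvmf_tgt is then launched inside that namespace with --wait-for-rpc so the test can finish configuration before the framework starts. Condensed from the commands above into one sketch (not the harness itself):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc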
00:28:12.481 [2024-07-12 11:06:28.895222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]]
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:12.742 null0
00:28:12.742 [2024-07-12 11:06:29.652843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:12.742 [2024-07-12 11:06:29.677103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2266546
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2266546 /var/tmp/bperf.sock
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2266546 ']'
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:12.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:12.742 11:06:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:13.003 [2024-07-12 11:06:29.735316] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:13.003 [2024-07-12 11:06:29.735380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266546 ]
00:28:13.003 EAL: No free 2048 kB hugepages reported on node 1
00:28:13.003 [2024-07-12 11:06:29.817247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:13.003 [2024-07-12 11:06:29.912226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:13.575 11:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:13.575 11:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:28:13.575 11:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:13.575 11:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:13.575 11:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:13.835 11:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:13.835 11:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:14.418 nvme0n1
00:28:14.419 11:06:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:14.419 11:06:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:14.419 Running I/O for 2 seconds...
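The trace above is the whole run_bperf flow for the first pass (randread, 4 KiB, QD 128): bdevperf starts frozen under --wait-for-rpc, framework_start_init releases it over /var/tmp/bperf.sock, a controller is attached with data digest enabled, and perform_tests kicks off the 2-second workload. Condensed into plain commands, all taken from the trace (only the SPDK variable is introduced for brevity):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # ... waitforlisten on /var/tmp/bperf.sock ...
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # --ddgst enables the NVMe/TCP data digest: every data PDU carries a crc32c
    # the initiator must compute and verify, which is what this test exercises.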
00:28:16.969
00:28:16.969 Latency(us)
00:28:16.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:16.969 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:16.969 nvme0n1 : 2.00 20967.30 81.90 0.00 0.00 6097.11 2416.64 11414.19
00:28:16.969 ===================================================================================================================
00:28:16.969 Total : 20967.30 81.90 0.00 0.00 6097.11 2416.64 11414.19
00:28:16.969 0
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:16.969 | select(.opcode=="crc32c")
00:28:16.969 | "\(.module_name) \(.executed)"'
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2266546
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2266546 ']'
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2266546
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2266546
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2266546'
00:28:16.969 killing process with pid 2266546
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2266546
00:28:16.969 Received shutdown signal, test time was about 2.000000 seconds
00:28:16.969
00:28:16.969 Latency(us)
00:28:16.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:16.969 ===================================================================================================================
00:28:16.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2266546
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2267375
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2267375 /var/tmp/bperf.sock
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2267375 ']'
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:16.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:16.969 11:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:16.970 [2024-07-12 11:06:33.742837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:16.970 [2024-07-12 11:06:33.742893] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267375 ]
00:28:16.970 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:16.970 Zero copy mechanism will not be used.
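Each pass ends the same way: the script reads the bperf app's accel statistics and checks that crc32c really ran in the expected module (plain software here, since scan_dsa=false). The jq filter in the trace above does the extraction; standalone, under the same socket path as the log:

    # Which accel module executed crc32c, and how many times?
    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    # The test then asserts acc_executed > 0 and acc_module == software.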
00:28:16.970 EAL: No free 2048 kB hugepages reported on node 1
00:28:16.970 [2024-07-12 11:06:33.817655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:16.970 [2024-07-12 11:06:33.870417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:17.540 11:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:17.540 11:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:28:17.540 11:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:17.540 11:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:17.540 11:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:17.800 11:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:17.800 11:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:18.059 nvme0n1
00:28:18.059 11:06:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:18.059 11:06:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:18.319 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:18.319 Zero copy mechanism will not be used.
00:28:18.319 Running I/O for 2 seconds...
00:28:20.230
00:28:20.230 Latency(us)
00:28:20.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.230 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:20.230 nvme0n1 : 2.05 2672.44 334.06 0.00 0.00 5873.12 1925.12 46093.65
00:28:20.230 ===================================================================================================================
00:28:20.230 Total : 2672.44 334.06 0.00 0.00 5873.12 1925.12 46093.65
00:28:20.230 0
00:28:20.230 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:20.230 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:20.230 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:20.230 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:20.230 | select(.opcode=="crc32c")
00:28:20.230 | "\(.module_name) \(.executed)"'
00:28:20.230 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2267375
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2267375 ']'
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2267375
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2267375
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2267375'
00:28:20.491 killing process with pid 2267375
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2267375
00:28:20.491 Received shutdown signal, test time was about 2.000000 seconds
00:28:20.491
00:28:20.491 Latency(us)
00:28:20.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.491 ===================================================================================================================
00:28:20.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:20.491 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2267375
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2268199
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2268199 /var/tmp/bperf.sock
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2268199 ']'
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:20.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:20.751 11:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:20.751 [2024-07-12 11:06:37.561982] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:20.751 [2024-07-12 11:06:37.562037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268199 ]
00:28:20.751 EAL: No free 2048 kB hugepages reported on node 1
00:28:20.751 [2024-07-12 11:06:37.636132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:20.751 [2024-07-12 11:06:37.689156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:21.691 11:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:21.691 11:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:28:21.691 11:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:21.691 11:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:21.691 11:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:21.691 11:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:21.691 11:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:21.952 nvme0n1
00:28:21.952 11:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:21.952 11:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:22.212 Running I/O for 2 seconds...
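Teardown is just as uniform: killprocess, traced twice above already, checks that the pid is still alive, confirms it is one of our reactors rather than a sudo wrapper, then kills and reaps it. A simplified sketch of that choreography (the real helper in common/autotest_common.sh also handles sudo'd processes rather than refusing):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                    # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1   # sketch: never kill the wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null || true              # reap; tolerate exit races
    }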
00:28:24.125
00:28:24.125 Latency(us)
00:28:24.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.125 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:24.125 nvme0n1 : 2.00 30072.74 117.47 0.00 0.00 4249.44 3959.47 10977.28
00:28:24.125 ===================================================================================================================
00:28:24.125 Total : 30072.74 117.47 0.00 0.00 4249.44 3959.47 10977.28
00:28:24.125 0
00:28:24.125 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:24.125 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:24.125 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:24.125 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:24.125 | select(.opcode=="crc32c")
00:28:24.125 | "\(.module_name) \(.executed)"'
00:28:24.125 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2268199
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2268199 ']'
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2268199
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2268199
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2268199'
00:28:24.419 killing process with pid 2268199
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2268199
00:28:24.419 Received shutdown signal, test time was about 2.000000 seconds
00:28:24.419
00:28:24.419 Latency(us)
00:28:24.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.419 ===================================================================================================================
00:28:24.419 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2268199
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2268897
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2268897 /var/tmp/bperf.sock
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2268897 ']'
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:24.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:24.419 11:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:24.707 [2024-07-12 11:06:41.419450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:24.707 [2024-07-12 11:06:41.419506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268897 ]
00:28:24.707 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:24.707 Zero copy mechanism will not be used.
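The bdevperf summaries above are internally consistent: MiB/s is simply IOPS times IO size over 2^20. Checking the randwrite/4096 run as an example:

    # 30072.74 IOPS at 4096 B per IO:
    awk 'BEGIN { printf "%.2f MiB/s\n", 30072.74 * 4096 / 1048576 }'
    # prints 117.47 MiB/s, matching the Total row of the randwrite/4096 table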
00:28:24.707 EAL: No free 2048 kB hugepages reported on node 1
00:28:24.707 [2024-07-12 11:06:41.494722] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:24.707 [2024-07-12 11:06:41.547221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:25.277 11:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:25.277 11:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:28:25.277 11:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:25.277 11:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:25.277 11:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:25.537 11:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:25.537 11:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:25.797 nvme0n1
00:28:25.797 11:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:25.797 11:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:25.797 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:25.797 Zero copy mechanism will not be used.
00:28:25.797 Running I/O for 2 seconds...
00:28:28.342
00:28:28.342 Latency(us)
00:28:28.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.342 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:28.342 nvme0n1 : 2.00 4880.20 610.03 0.00 0.00 3274.09 1570.13 11086.51
00:28:28.342 ===================================================================================================================
00:28:28.342 Total : 4880.20 610.03 0.00 0.00 3274.09 1570.13 11086.51
00:28:28.342 0
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:28.342 | select(.opcode=="crc32c")
00:28:28.342 | "\(.module_name) \(.executed)"'
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2268897
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2268897 ']'
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2268897
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2268897
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:28.342 11:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2268897'
00:28:28.342 killing process with pid 2268897
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2268897
00:28:28.342 Received shutdown signal, test time was about 2.000000 seconds
00:28:28.342
00:28:28.342 Latency(us)
00:28:28.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.342 ===================================================================================================================
00:28:28.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2268897
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2266494
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2266494 ']'
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2266494
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2266494
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2266494'
00:28:28.342 killing process with pid 2266494
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2266494
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2266494
00:28:28.342
00:28:28.342 real 0m16.637s
00:28:28.342 user 0m32.440s
00:28:28.342 sys 0m3.598s
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:28.342 ************************************
00:28:28.342 END TEST nvmf_digest_clean
00:28:28.342 ************************************
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:28.342 11:06:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:28.603 ************************************
00:28:28.603 START TEST nvmf_digest_error
00:28:28.603 ************************************
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2269611
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2269611
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2269611 ']'
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:28.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:28.603 11:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:28.603 [2024-07-12 11:06:45.426834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:28.603 [2024-07-12 11:06:45.426887] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:28.603 EAL: No free 2048 kB hugepages reported on node 1
00:28:28.603 [2024-07-12 11:06:45.508481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:28.603 [2024-07-12 11:06:45.567858] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:28.603 [2024-07-12 11:06:45.567890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:28.603 [2024-07-12 11:06:45.567895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:28.603 [2024-07-12 11:06:45.567900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:28.603 [2024-07-12 11:06:45.567904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
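Where nvmf_digest_clean only measured digest throughput, the test starting here makes digests fail on purpose: crc32c on the target is routed through the error accel module and told to corrupt results, while the initiator collects NVMe error stats and retries indefinitely, so each corruption surfaces as a transient transport error rather than a failed run. Condensed from the traces that follow (socket paths as in the log; the disable/corrupt sequencing around the controller attach is simplified):

    # Target side: route crc32c through the error module, then corrupt 256 ops.
    scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
    scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256
    # Initiator side: keep NVMe error statistics and retry failed I/O forever.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1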
00:28:28.603 [2024-07-12 11:06:45.567920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.545 [2024-07-12 11:06:46.225741] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.545 null0
00:28:29.545 [2024-07-12 11:06:46.301965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:29.545 [2024-07-12 11:06:46.326150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2269897
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2269897 /var/tmp/bperf.sock
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2269897 ']'
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:29.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:29.545 11:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.545 [2024-07-12 11:06:46.379848] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:29.545 [2024-07-12 11:06:46.379894] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269897 ]
00:28:29.545 EAL: No free 2048 kB hugepages reported on node 1
00:28:29.545 [2024-07-12 11:06:46.453595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:29.545 [2024-07-12 11:06:46.507028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:30.488 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:30.488 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:30.488 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:30.488 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:30.488 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:30.488 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:30.488 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:30.488 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:30.488 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:30.488 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:30.749 nvme0n1
00:28:30.749 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:30.749 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:30.749 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:31.010 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:31.010 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:31.010 11:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:31.010 Running I/O for 2 seconds... 00:28:31.010 [2024-07-12 11:06:47.831221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.010 [2024-07-12 11:06:47.831252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.010 [2024-07-12 11:06:47.831261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.010 [2024-07-12 11:06:47.842017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.010 [2024-07-12 11:06:47.842037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.010 [2024-07-12 11:06:47.842048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.010 [2024-07-12 11:06:47.850210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.010 [2024-07-12 11:06:47.850228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.010 [2024-07-12 11:06:47.850235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.010 [2024-07-12 11:06:47.859098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.010 [2024-07-12 11:06:47.859115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.010 [2024-07-12 11:06:47.859126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.011 [2024-07-12 11:06:47.868816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.011 [2024-07-12 11:06:47.868835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.011 [2024-07-12 11:06:47.868841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.011 [2024-07-12 11:06:47.877131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.011 [2024-07-12 11:06:47.877149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.011 [2024-07-12 11:06:47.877156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.011 [2024-07-12 11:06:47.887278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.011 [2024-07-12 11:06:47.887296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6506 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:31.011 [2024-07-12 11:06:47.887302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.011 [2024-07-12 11:06:47.895937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.011 [2024-07-12 11:06:47.895954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.011 [2024-07-12 11:06:47.895961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.011 [2024-07-12 11:06:47.904510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.011 [2024-07-12 11:06:47.904527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.011 [2024-07-12 11:06:47.904533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.011 [2024-07-12 11:06:47.913881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.011 [2024-07-12 11:06:47.913899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.011 [2024-07-12 11:06:47.913905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.011 [2024-07-12 11:06:47.922498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.011 [2024-07-12 11:06:47.922519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.011 [2024-07-12 11:06:47.922525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.011 [2024-07-12 11:06:47.931052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.011 [2024-07-12 11:06:47.931069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.011 [2024-07-12 11:06:47.931075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.011 [2024-07-12 11:06:47.939699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.011 [2024-07-12 11:06:47.939716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.011 [2024-07-12 11:06:47.939722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.011 [2024-07-12 11:06:47.947888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:31.011 [2024-07-12 11:06:47.947905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:14230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.011 [2024-07-12 11:06:47.947912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.011 [2024-07-12 11:06:47.956984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0)
00:28:31.011 [2024-07-12 11:06:47.957001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.011 [2024-07-12 11:06:47.957007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... 33 further data digest error events in the same three-line pattern elided (timestamps 11:06:47.965 through 11:06:48.248, qid:1, len:1, varying cid/lba), each READ completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
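[Editor's note: each three-line event above is the SPDK NVMe/TCP initiator detecting a data-digest mismatch on a received C2HData PDU and failing the affected READ with "COMMAND TRANSIENT TRANSPORT ERROR (00/22)". The "(00/22)" pair is Status Code Type 0x0 (generic) / Status Code 0x22 (Transient Transport Error), and dnr:0 means the Do-Not-Retry bit is clear, so the initiator may retry the command. The sketch below is illustrative and is not SPDK's own print routine; it decodes the completion-status bits shown in these lines, with the field layout taken from the NVMe base spec (CQE Dword 3, bits 31:16).]

#include <stdint.h>
#include <stdio.h>

/* Decode the phase + status word carried in NVMe CQE Dword 3 (bits 31:16). */
static void print_status(uint16_t sts)
{
	unsigned p   = sts & 0x1;          /* phase tag -> the "p:" field   */
	unsigned sc  = (sts >> 1) & 0xff;  /* status code                   */
	unsigned sct = (sts >> 9) & 0x7;   /* status code type              */
	unsigned m   = (sts >> 14) & 0x1;  /* more information -> "m:"      */
	unsigned dnr = (sts >> 15) & 0x1;  /* do not retry -> "dnr:"        */

	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
	/* SCT 0x0 / SC 0x22: generic "Transient Transport Error", as logged;
	 * prints "(00/22) p:0 m:0 dnr:0", matching the records above. */
	print_status((uint16_t)((0x0u << 9) | (0x22u << 1)));
	return 0;
}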
00:28:31.537 [2024-07-12 11:06:48.257591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0)
00:28:31.537 [2024-07-12 11:06:48.257608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.537 [2024-07-12 11:06:48.257614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... 75 further data digest error events in the same three-line pattern elided (timestamps 11:06:48.266 through 11:06:48.917, qid:1, len:1, varying cid/lba), each READ completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
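[Editor's note: the recurring "data digest error" is the NVMe/TCP DDGST check failing: when data digests are negotiated, the receiver recomputes CRC-32C over each C2HData PDU payload and compares it against the PDU's 4-byte digest trailer, and this test run corrupts the digest on purpose, so every READ trips the comparison. A minimal bit-at-a-time sketch of that check follows; it is illustrative only, since SPDK computes the digest through its accel framework over iovecs, as the nvme_tcp_accel_seq_recv_compute_crc32_done frames above indicate.]

#include <stddef.h>
#include <stdint.h>

/* Reflected CRC-32C (Castagnoli), polynomial 0x82f63b78, one bit at a time. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xffffffffu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++)
			crc = (crc >> 1) ^ (0x82f63b78u & (0u - (crc & 1u)));
	}
	return crc ^ 0xffffffffu;
}

/* Return nonzero when the received DDGST trailer matches the payload. */
static int ddgst_ok(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
	return crc32c(payload, len) == recv_ddgst;
}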
00:28:32.062 [2024-07-12 11:06:48.925881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0)
00:28:32.062 [2024-07-12 11:06:48.925898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.062 [2024-07-12 11:06:48.925904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... 33 further data digest error events in the same three-line pattern elided (timestamps 11:06:48.934 through 11:06:49.219, qid:1, len:1, varying cid/lba), each READ completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
00:28:32.325 [2024-07-12 11:06:49.228974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0)
00:28:32.325 [2024-07-12 11:06:49.228990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1
lba:2772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.325 [2024-07-12 11:06:49.228996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.325 [2024-07-12 11:06:49.236549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.325 [2024-07-12 11:06:49.236565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.325 [2024-07-12 11:06:49.236571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.325 [2024-07-12 11:06:49.246833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.325 [2024-07-12 11:06:49.246850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.325 [2024-07-12 11:06:49.246856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.325 [2024-07-12 11:06:49.256311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.325 [2024-07-12 11:06:49.256328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.325 [2024-07-12 11:06:49.256335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.325 [2024-07-12 11:06:49.264227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.325 [2024-07-12 11:06:49.264244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.325 [2024-07-12 11:06:49.264250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.325 [2024-07-12 11:06:49.274231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.325 [2024-07-12 11:06:49.274249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.325 [2024-07-12 11:06:49.274255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.325 [2024-07-12 11:06:49.282300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.325 [2024-07-12 11:06:49.282317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.325 [2024-07-12 11:06:49.282322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.325 [2024-07-12 11:06:49.290754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.325 [2024-07-12 11:06:49.290772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.325 [2024-07-12 11:06:49.290781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.325 [2024-07-12 11:06:49.300459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.325 [2024-07-12 11:06:49.300476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.325 [2024-07-12 11:06:49.300482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.586 [2024-07-12 11:06:49.308136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.586 [2024-07-12 11:06:49.308153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.308160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.318283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.318300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.318306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.326990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.327007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.327013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.334838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.334854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.334860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.344073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.344090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.344096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.352802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 
00:28:32.587 [2024-07-12 11:06:49.352820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.352826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.360797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.360814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.360820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.370490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.370510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.370516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.378813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.378830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.378836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.387465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.387482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.387489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.396288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.396305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.396312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.405828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.405844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.405850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.414277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.414294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.414300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.422689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.422705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.422711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.431430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.431447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.431453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.440250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.440266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.440273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.449720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.449737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.449743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.457576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.457592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.457599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.467111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.467131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.467138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.476375] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.476392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.476399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.485321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.485338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.485344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.493467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.493484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.493490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.502217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.502234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.502240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.510795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.510813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.510819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.519784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.519800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.519809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.528740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.528758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.528764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:32.587 [2024-07-12 11:06:49.538729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.538746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.538752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.546526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.546543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.546550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.556514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.587 [2024-07-12 11:06:49.556530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.587 [2024-07-12 11:06:49.556537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.587 [2024-07-12 11:06:49.565084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.588 [2024-07-12 11:06:49.565101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.588 [2024-07-12 11:06:49.565107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.573713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.573729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.573735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.582280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.582296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.582302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.592593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.592610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.592616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.600716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.600732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.600738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.611546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.611562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.611568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.620022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.620038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.620044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.628327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.628344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.628350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.637709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.637727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.637733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.646211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.646227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.646233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.654782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.654798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.654804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.663798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.663815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.663822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.672738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.672755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.672764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.680971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.680989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.680995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.691048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.691065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.691071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.698532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.698551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.698557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.708424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.708440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.708446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.717213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.717229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 
[2024-07-12 11:06:49.717235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.726480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.726498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.726504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.737251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.737268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.737274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.744987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.745004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.745010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.755106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.755131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.755137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.764051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.764067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.764073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.773538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.773555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.773561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.782458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.782475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12589 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.849 [2024-07-12 11:06:49.782481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.849 [2024-07-12 11:06:49.791603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.849 [2024-07-12 11:06:49.791620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-12 11:06:49.791626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.850 [2024-07-12 11:06:49.800826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.850 [2024-07-12 11:06:49.800843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-12 11:06:49.800849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.850 [2024-07-12 11:06:49.808691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.850 [2024-07-12 11:06:49.808708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-12 11:06:49.808714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.850 [2024-07-12 11:06:49.817396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xba28e0) 00:28:32.850 [2024-07-12 11:06:49.817413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-12 11:06:49.817419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.111 00:28:33.111 Latency(us) 00:28:33.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.111 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:33.111 nvme0n1 : 2.04 28116.54 109.83 0.00 0.00 4457.09 2075.31 46530.56 00:28:33.111 =================================================================================================================== 00:28:33.111 Total : 28116.54 109.83 0.00 0.00 4457.09 2075.31 46530.56 00:28:33.111 0 00:28:33.111 11:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:33.111 11:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:33.111 11:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:33.111 | .driver_specific 00:28:33.111 | .nvme_error 00:28:33.111 | .status_code 00:28:33.111 | .command_transient_transport_error' 00:28:33.111 11:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:33.111 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 
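That assertion is the pass/fail core of the test: get_transient_errcount asks bdevperf (over /var/tmp/bperf.sock) for nvme0n1's I/O statistics, which carry per-status-code NVMe error counters because --nvme-error-stat was set when bdev_nvme was configured, and the run passes only if the transient-transport-error count is nonzero (here 225, one per corrupted read). A minimal standalone sketch of the same check, assuming an SPDK tree root as working directory and a bdevperf instance still listening on the socket:

    # Fetch nvme0n1's iostat over bdevperf's RPC socket; with
    # bdev_nvme_set_options --nvme-error-stat active, the reply includes a
    # per-status-code breakdown of NVMe errors seen by the bdev.
    errs=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # Every read whose data digest check failed completed as COMMAND TRANSIENT
    # TRANSPORT ERROR (00/22), so a healthy run must have counted at least one.
    (( errs > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }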
00:28:33.111 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2269897
00:28:33.111 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2269897 ']'
00:28:33.111 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2269897
00:28:33.111 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:33.111 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:33.111 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2269897
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2269897'
00:28:33.372 killing process with pid 2269897
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2269897
00:28:33.372 Received shutdown signal, test time was about 2.000000 seconds
00:28:33.372
00:28:33.372 Latency(us)
00:28:33.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:33.372 ===================================================================================================================
00:28:33.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2269897
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2270648
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2270648 /var/tmp/bperf.sock
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2270648 ']'
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:33.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:33.372 11:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:33.372 [2024-07-12 11:06:50.276059] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:33.372 [2024-07-12 11:06:50.276114] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270648 ]
00:28:33.372 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:33.372 Zero copy mechanism will not be used.
00:28:33.372 EAL: No free 2048 kB hugepages reported on node 1
00:28:33.372 [2024-07-12 11:06:50.351931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:33.633 [2024-07-12 11:06:50.405139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:34.204 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:34.204 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:34.204 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:34.204 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:34.465 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:34.465 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:34.465 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:34.465 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:34.465 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:34.465 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:34.726 nvme0n1
00:28:34.726 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:34.726 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:34.726 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:34.726 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:34.726 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:34.726 11:06:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
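Strung together, the records above are the entire setup for this error pass: enable NVMe error statistics and unlimited bdev retries, clear any stale injection, attach the controller with the TCP data digest enabled, re-arm the crc32c corruption, and release the queued job. The same sequence as bare RPC calls, a sketch assuming the layout of this run (SPDK tree root as working directory, bdevperf on /var/tmp/bperf.sock, and the plain rpc_cmd lines going to the default RPC socket, presumably the nvmf target application started earlier in the log):

    BPERF="./scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf's RPC socket

    # Count NVMe status codes per bdev and retry failed I/O indefinitely, so
    # corrupted reads are recorded as transient errors rather than ending the job.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any leftover crc32c error injection (default socket, no -s flag).
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # Attach the target subsystem with the TCP data digest (DDGST) enabled.
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm the injection: corrupt crc32c results so digest validation fails.
    # ('-i 32' is copied verbatim from the trace; its semantics are not shown here.)
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Release the queued bdevperf job (randread, 128 KiB I/O, queue depth 16, 2 s).
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests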
00:28:34.726 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:34.726 Zero copy mechanism will not be used.
Running I/O for 2 seconds...
[... the same three-record sequence as in the first pass repeats for the rest of this excerpt, 2024-07-12 11:06:51.642689 through 11:06:52.039200 (the excerpt ends mid-record): each 32-block READ on qid:1 cid:15 nsid:1 against tqpair=(0x16f19e0) hits *ERROR*: data digest error in nvme_tcp_accel_seq_recv_compute_crc32_done and completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0, with sqhd cycling 0021 -> 0041 -> 0061 -> 0001 ...]
digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.039217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.039223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.050529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.050546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.050552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.062762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.062783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.062789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.073495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.073512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.073519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.083258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.083275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.083281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.094029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.094046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.094053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.104144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.104162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.104168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.114747] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.114765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.114771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.123513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.123530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.123536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.131607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.131623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.131629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.138571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.138588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.138600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.145781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.145797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.145803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.153930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.153946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.153952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.160889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.160905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.160911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.167608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.167625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.167631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.173890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.173906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.173912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.180172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.180188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.180194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.186065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.186082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.250 [2024-07-12 11:06:52.186087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.250 [2024-07-12 11:06:52.192796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.250 [2024-07-12 11:06:52.192813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.251 [2024-07-12 11:06:52.192819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.251 [2024-07-12 11:06:52.198737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.251 [2024-07-12 11:06:52.198757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.251 [2024-07-12 11:06:52.198763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.251 [2024-07-12 11:06:52.205328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.251 [2024-07-12 11:06:52.205345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.251 [2024-07-12 11:06:52.205351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.251 [2024-07-12 11:06:52.213092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.251 [2024-07-12 11:06:52.213109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.251 [2024-07-12 11:06:52.213115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.251 [2024-07-12 11:06:52.222149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.251 [2024-07-12 11:06:52.222166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.251 [2024-07-12 11:06:52.222173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.251 [2024-07-12 11:06:52.230831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.251 [2024-07-12 11:06:52.230848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.251 [2024-07-12 11:06:52.230854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.512 [2024-07-12 11:06:52.240501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.512 [2024-07-12 11:06:52.240520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.512 [2024-07-12 11:06:52.240526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.512 [2024-07-12 11:06:52.249349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.512 [2024-07-12 11:06:52.249367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.512 [2024-07-12 11:06:52.249373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.512 [2024-07-12 11:06:52.258296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.512 [2024-07-12 11:06:52.258313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.512 [2024-07-12 11:06:52.258319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.512 [2024-07-12 11:06:52.268115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.512 [2024-07-12 11:06:52.268138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.512 [2024-07-12 11:06:52.268144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.512 [2024-07-12 11:06:52.277459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.512 [2024-07-12 11:06:52.277476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.512 [2024-07-12 11:06:52.277482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.285796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.285812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.285818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.294447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.294464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.294471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.302537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.302554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.302561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.310222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.310239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.310245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.317464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.317481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.317487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.324070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.324087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 
[2024-07-12 11:06:52.324093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.330627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.330643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.330649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.336458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.336474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.336484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.342518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.342535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.342541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.348257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.348275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.348280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.354554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.354570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.354576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.361152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.361170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.361176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.367227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.367244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.367250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.373191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.373208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.373214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.378811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.378828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.378834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.384451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.384468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.384474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.389860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.389881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.389887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.395227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.395244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.395250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.400472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.513 [2024-07-12 11:06:52.400489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.513 [2024-07-12 11:06:52.400495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.513 [2024-07-12 11:06:52.405594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.405610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.405616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.410743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.410758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.410764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.415932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.415949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.415955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.421297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.421314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.421320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.426431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.426448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.426454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.431815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.431833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.431839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.437460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.437478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.437484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.442824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.442841] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.442846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.448717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.448735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.448741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.454373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.454391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.454397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.459682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.459700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.459706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.464957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.464975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.464981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.471019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.471036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.471043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.477119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.477141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.477147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.483419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.483435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.483445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.514 [2024-07-12 11:06:52.490681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.514 [2024-07-12 11:06:52.490699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.514 [2024-07-12 11:06:52.490705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.776 [2024-07-12 11:06:52.496967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.776 [2024-07-12 11:06:52.496985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.776 [2024-07-12 11:06:52.496991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.776 [2024-07-12 11:06:52.503099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.776 [2024-07-12 11:06:52.503116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.776 [2024-07-12 11:06:52.503127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.776 [2024-07-12 11:06:52.509068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.776 [2024-07-12 11:06:52.509085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.776 [2024-07-12 11:06:52.509091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.776 [2024-07-12 11:06:52.514951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.776 [2024-07-12 11:06:52.514969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.776 [2024-07-12 11:06:52.514976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.776 [2024-07-12 11:06:52.520708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.776 [2024-07-12 11:06:52.520725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.776 [2024-07-12 11:06:52.520732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.776 [2024-07-12 11:06:52.526677] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.776 [2024-07-12 11:06:52.526695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.776 [2024-07-12 11:06:52.526700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.776 [2024-07-12 11:06:52.532352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.776 [2024-07-12 11:06:52.532369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.776 [2024-07-12 11:06:52.532375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.776 [2024-07-12 11:06:52.537753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.776 [2024-07-12 11:06:52.537771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.776 [2024-07-12 11:06:52.537777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.776 [2024-07-12 11:06:52.543531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.776 [2024-07-12 11:06:52.543549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.776 [2024-07-12 11:06:52.543555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.776 [2024-07-12 11:06:52.549685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.549702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.549708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.555260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.555277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.555283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.560844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.560862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.560867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:35.777 [2024-07-12 11:06:52.566436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.566453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.566459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.571881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.571899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.571905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.577660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.577678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.577684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.583399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.583416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.583426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.589728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.589746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.589752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.595708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.595725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.595731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.602319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.602337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.602343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.608152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.608169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.608175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.613600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.613617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.613624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.619341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.619358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.619364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.625657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.625674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.625681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.631596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.631614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.631620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.637465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.637486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.637492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.644397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.644415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.644421] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.652501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.652518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.652524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.660350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.660367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.660373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.668256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.668273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.668279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.676236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.676254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.676260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.684945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.684963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.684969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.693094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.693112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.693118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.699999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.700016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.700022] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.706675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.706693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.706699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.713033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.713050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.713056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.719156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.719173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.719179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.725603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.725620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.725627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.777 [2024-07-12 11:06:52.732295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.777 [2024-07-12 11:06:52.732313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.777 [2024-07-12 11:06:52.732319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.778 [2024-07-12 11:06:52.738895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.778 [2024-07-12 11:06:52.738913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.778 [2024-07-12 11:06:52.738919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:35.778 [2024-07-12 11:06:52.745288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.778 [2024-07-12 11:06:52.745305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
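Note: each three-line group above shows the NVMe/TCP host detecting a data digest (DDGST) mismatch on a received data PDU (C2HData for a host READ). The NVMe/TCP data digest is a CRC-32C over the PDU payload; on mismatch SPDK completes the affected command with the generic status TRANSIENT TRANSPORT ERROR (status code type 00h, status code 22h, printed as (00/22)). In the completion print, sqhd is the submission queue head pointer, p the phase tag, m the more bit, and dnr the do-not-retry bit; dnr:0 leaves the host free to retry, which is why the workload keeps issuing further READs. Below is a minimal sketch of the reflected CRC-32C (Castagnoli) computation behind such a digest check, in Python purely for illustration; SPDK's actual path is the accelerated routine named in the log (nvme_tcp_accel_seq_recv_compute_crc32_done), not this bitwise loop.

    def crc32c(data: bytes) -> int:
        """Bitwise reflected CRC-32C, the checksum used for NVMe/TCP HDGST/DDGST."""
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                # 0x82F63B78 is the reflected form of the Castagnoli polynomial 0x1EDC6F41
                crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
        return crc ^ 0xFFFFFFFF

    # Standard check value for CRC-32C:
    assert crc32c(b"123456789") == 0xE3069283

A receiver recomputes crc32c() over the received payload and compares it with the PDU's DDGST field; a mismatch is surfaced as the retryable transport error seen in the completions above.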
00:28:35.778 [2024-07-12 11:06:52.745311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.778 [2024-07-12 11:06:52.751967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.778 [2024-07-12 11:06:52.751984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.778 [2024-07-12 11:06:52.751990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:35.778 [2024-07-12 11:06:52.758240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:35.778 [2024-07-12 11:06:52.758259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.778 [2024-07-12 11:06:52.758269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.764238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.764256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.764262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.770104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.770126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.770132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.775858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.775875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.775882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.781718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.781737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.781743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.786951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.786969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.786976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.792283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.792301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.792307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.797709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.797728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.797734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.803014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.803032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.803038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.808538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.808560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.808566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.814531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.814549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.814555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.820456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.820475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.820480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.826217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.826235] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.826241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.832012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.832029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.832035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.837649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.837667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.837672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.843368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.843386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.843392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.849185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.849202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.849208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.854600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.854618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.854624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.860863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.860881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.860887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.866776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.866793] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.866799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.873298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.873316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.873322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.879481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.879500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.879506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.885481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.885498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.040 [2024-07-12 11:06:52.885504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.040 [2024-07-12 11:06:52.890969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.040 [2024-07-12 11:06:52.890987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.890993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.896426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.896444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.896450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.901860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.901877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.901883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.907405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 
00:28:36.041 [2024-07-12 11:06:52.907426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.907432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.914979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.914997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.915003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.923801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.923819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.923825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.932319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.932337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.932343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.938899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.938917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.938924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.946572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.946589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.946595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.953763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.953781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.953787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.960342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.960360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.960365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.966890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.966908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.966914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.973134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.973151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.973157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.979191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.979208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.979214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.985162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.985179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.985186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.991356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.991374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.991380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:52.998367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:52.998385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:52.998391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:53.005817] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:53.005834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:53.005840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:53.013205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:53.013223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:53.013229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.041 [2024-07-12 11:06:53.021848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.041 [2024-07-12 11:06:53.021865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.041 [2024-07-12 11:06:53.021871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.302 [2024-07-12 11:06:53.028896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.302 [2024-07-12 11:06:53.028914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.302 [2024-07-12 11:06:53.028923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.302 [2024-07-12 11:06:53.035754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.302 [2024-07-12 11:06:53.035772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.302 [2024-07-12 11:06:53.035778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.302 [2024-07-12 11:06:53.042382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.302 [2024-07-12 11:06:53.042400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.302 [2024-07-12 11:06:53.042406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.302 [2024-07-12 11:06:53.048909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.302 [2024-07-12 11:06:53.048927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.302 [2024-07-12 11:06:53.048933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:36.302 [2024-07-12 11:06:53.057077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.302 [2024-07-12 11:06:53.057095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.302 [2024-07-12 11:06:53.057101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.302 [2024-07-12 11:06:53.066816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.302 [2024-07-12 11:06:53.066834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.302 [2024-07-12 11:06:53.066841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.074959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.074977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.074984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.083616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.083634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.083640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.092297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.092314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.092321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.102625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.102646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.102652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.115814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.115832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.115838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.129684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.129702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.129708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.142853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.142872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.142878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.155444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.155462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.155468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.167173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.167190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.167196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.180741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.180759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.180765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.191985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.192002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.192009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.204422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.204441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.204447] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.215852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.215870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.215877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.227495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.227513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.227519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.237712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.237731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.237737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.248919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.248937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.248943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.261136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.261154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.261160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.270819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.270837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.270843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.303 [2024-07-12 11:06:53.282937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.303 [2024-07-12 11:06:53.282955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.303 [2024-07-12 11:06:53.282961] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.564 [2024-07-12 11:06:53.294726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.564 [2024-07-12 11:06:53.294744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.564 [2024-07-12 11:06:53.294750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.564 [2024-07-12 11:06:53.304818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.564 [2024-07-12 11:06:53.304835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.564 [2024-07-12 11:06:53.304845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.564 [2024-07-12 11:06:53.316818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.564 [2024-07-12 11:06:53.316835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.564 [2024-07-12 11:06:53.316842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.564 [2024-07-12 11:06:53.325846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.564 [2024-07-12 11:06:53.325864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.564 [2024-07-12 11:06:53.325870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.564 [2024-07-12 11:06:53.337647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.564 [2024-07-12 11:06:53.337665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.564 [2024-07-12 11:06:53.337672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.564 [2024-07-12 11:06:53.350964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.564 [2024-07-12 11:06:53.350982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.564 [2024-07-12 11:06:53.350988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.564 [2024-07-12 11:06:53.361288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.564 [2024-07-12 11:06:53.361306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:36.564 [2024-07-12 11:06:53.361312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.564 [2024-07-12 11:06:53.371765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.564 [2024-07-12 11:06:53.371783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.564 [2024-07-12 11:06:53.371789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.564 [2024-07-12 11:06:53.382739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.564 [2024-07-12 11:06:53.382757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.564 [2024-07-12 11:06:53.382763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.392040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.392059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.392066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.401347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.401369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.401375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.410327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.410345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.410352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.420925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.420943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.420951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.431510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.431529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.431535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.443241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.443259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.443266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.453977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.453995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.454001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.467545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.467564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.467570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.481074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.481093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.481099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.493752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.493770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.493779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.504706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.504723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.504729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.515473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.515491] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.515497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.525655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.525673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.525680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.565 [2024-07-12 11:06:53.537542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.565 [2024-07-12 11:06:53.537560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.565 [2024-07-12 11:06:53.537566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.826 [2024-07-12 11:06:53.547853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.826 [2024-07-12 11:06:53.547871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.826 [2024-07-12 11:06:53.547877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.826 [2024-07-12 11:06:53.558080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.826 [2024-07-12 11:06:53.558098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.826 [2024-07-12 11:06:53.558104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.826 [2024-07-12 11:06:53.570872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.826 [2024-07-12 11:06:53.570890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.826 [2024-07-12 11:06:53.570896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.826 [2024-07-12 11:06:53.582814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.826 [2024-07-12 11:06:53.582832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.826 [2024-07-12 11:06:53.582838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.826 [2024-07-12 11:06:53.595258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.826 [2024-07-12 11:06:53.595279] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.826 [2024-07-12 11:06:53.595285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.826 [2024-07-12 11:06:53.606684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.826 [2024-07-12 11:06:53.606702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.826 [2024-07-12 11:06:53.606707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.826 [2024-07-12 11:06:53.614965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.826 [2024-07-12 11:06:53.614983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.826 [2024-07-12 11:06:53.614989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.826 [2024-07-12 11:06:53.626863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16f19e0) 00:28:36.826 [2024-07-12 11:06:53.626881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.826 [2024-07-12 11:06:53.626888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.826 00:28:36.826 Latency(us) 00:28:36.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.826 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:36.826 nvme0n1 : 2.00 3696.43 462.05 0.00 0.00 4325.43 1140.05 14090.24 00:28:36.826 =================================================================================================================== 00:28:36.826 Total : 3696.43 462.05 0.00 0.00 4325.43 1140.05 14090.24 00:28:36.826 0 00:28:36.826 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:36.826 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:36.826 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:36.826 | .driver_specific 00:28:36.826 | .nvme_error 00:28:36.826 | .status_code 00:28:36.826 | .command_transient_transport_error' 00:28:36.826 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 238 > 0 )) 00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2270648 00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2270648 ']' 00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2270648 00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error 
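Aside: stripped of the xtrace noise, the pass/fail probe just traced (digest.sh@71/@27/@28/@18) is a single shell pipeline. The sketch below is reassembled purely from the trace lines above — same rpc.py path, same socket, same jq filter; only the variable name errcount is invented for illustration:

    # Count completions recorded with status COMMAND TRANSIENT TRANSPORT ERROR for nvme0n1
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # here the count is 238, hence the (( 238 > 0 )) gate above passes
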
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2270648
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2270648 ']'
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2270648
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2270648
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2270648'
killing process with pid 2270648
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2270648
Received shutdown signal, test time was about 2.000000 seconds
00:28:37.087
00:28:37.087 Latency(us)
00:28:37.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:37.087 ===================================================================================================================
00:28:37.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2270648
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2271324
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2271324 /var/tmp/bperf.sock
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2271324 ']'
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:37.087 11:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:37.087 [2024-07-12 11:06:54.038317] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:37.087 [2024-07-12 11:06:54.038372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2271324 ]
00:28:37.087 EAL: No free 2048 kB hugepages reported on node 1
00:28:37.348 [2024-07-12 11:06:54.111243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:37.348 [2024-07-12 11:06:54.163755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:37.918 11:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:37.919 11:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:37.919 11:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:37.919 11:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:38.180 11:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:38.180 11:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:38.180 11:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:38.180 11:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:38.180 11:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:38.180 11:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:38.441 nvme0n1
00:28:38.441 11:06:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:38.441 11:06:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:38.441 11:06:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:38.441 11:06:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:38.441 11:06:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:38.441 11:06:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:38.441 Running I/O for 2 seconds...
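The randwrite pass now starting mirrors the randread one: with bdevperf up on the same socket, digest.sh re-arms crc32c error injection and drives I/O over a data-digest-enabled connection. Below is a sketch of that RPC sequence, reassembled only from the trace lines above. Paths and flags are exactly the ones shown; routing the accel_error_inject_error calls through rpc.py to the bperf socket is an assumption, since the trace issues them via the rpc_cmd helper, whose target socket is not visible in this excerpt:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    SOCK=/var/tmp/bperf.sock

    # Keep per-NVMe error statistics and retry failed I/O indefinitely, so injected
    # digest failures are counted as transient errors instead of failing the job.
    "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Connect with data digest (--ddgst) enabled while crc32c corruption is switched
    # off, so the attach itself is clean.
    "$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t disable   # assumed socket
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 256 crc32c operations, then run the 2-second workload.
    "$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256   # assumed socket
    "$BPERF_PY" -s "$SOCK" perform_tests

The Data digest error and COMMAND TRANSIENT TRANSPORT ERROR lines that follow are the expected effect of the corrupt injection, not a test failure.
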
00:28:38.441 [2024-07-12 11:06:55.391353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f35f0 00:28:38.441 [2024-07-12 11:06:55.392246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.441 [2024-07-12 11:06:55.392273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.441 [2024-07-12 11:06:55.399875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.441 [2024-07-12 11:06:55.400792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.441 [2024-07-12 11:06:55.400809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.441 [2024-07-12 11:06:55.408376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f35f0 00:28:38.441 [2024-07-12 11:06:55.409286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.441 [2024-07-12 11:06:55.409302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.441 [2024-07-12 11:06:55.416816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.441 [2024-07-12 11:06:55.417732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.441 [2024-07-12 11:06:55.417748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.425377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f35f0 00:28:38.703 [2024-07-12 11:06:55.426285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.426300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.433871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.434778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.434794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.442252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f35f0 00:28:38.703 [2024-07-12 11:06:55.443166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.443181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 
sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.450674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.451580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.451596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.459093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f35f0 00:28:38.703 [2024-07-12 11:06:55.460007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.460023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.467515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.468438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.468453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.476102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f35f0 00:28:38.703 [2024-07-12 11:06:55.477014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.477030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.484471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.485390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.485405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.492854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f35f0 00:28:38.703 [2024-07-12 11:06:55.493765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.493780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.501265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.502176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.502191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.509655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f35f0 00:28:38.703 [2024-07-12 11:06:55.510566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.510581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.517940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.518732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.518747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.526325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.703 [2024-07-12 11:06:55.527036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.527051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.534692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.535546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.535561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.543054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.703 [2024-07-12 11:06:55.543907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.543922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.551455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.552313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.552328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.559840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.703 [2024-07-12 11:06:55.560694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.560709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.568218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.569068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.569083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.576571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.703 [2024-07-12 11:06:55.577389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.577404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.584924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.585776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.585794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.593292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.703 [2024-07-12 11:06:55.594150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.594165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.601651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.703 [2024-07-12 11:06:55.602497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.703 [2024-07-12 11:06:55.602513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.703 [2024-07-12 11:06:55.610007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.704 [2024-07-12 11:06:55.610855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.704 [2024-07-12 11:06:55.610870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.704 [2024-07-12 11:06:55.618376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.704 [2024-07-12 11:06:55.619220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.704 [2024-07-12 11:06:55.619235] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.704 [2024-07-12 11:06:55.626724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.704 [2024-07-12 11:06:55.627572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.704 [2024-07-12 11:06:55.627587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.704 [2024-07-12 11:06:55.635104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.704 [2024-07-12 11:06:55.635956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.704 [2024-07-12 11:06:55.635971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.704 [2024-07-12 11:06:55.643478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.704 [2024-07-12 11:06:55.644319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.704 [2024-07-12 11:06:55.644334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.704 [2024-07-12 11:06:55.651852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.704 [2024-07-12 11:06:55.652705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.704 [2024-07-12 11:06:55.652720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.704 [2024-07-12 11:06:55.660228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.704 [2024-07-12 11:06:55.661084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.704 [2024-07-12 11:06:55.661099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.704 [2024-07-12 11:06:55.668588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.704 [2024-07-12 11:06:55.669447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.704 [2024-07-12 11:06:55.669462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.704 [2024-07-12 11:06:55.676951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.704 [2024-07-12 11:06:55.677801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.704 [2024-07-12 11:06:55.677816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.704 [2024-07-12 11:06:55.685328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.965 [2024-07-12 11:06:55.686172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.686187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.693699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.965 [2024-07-12 11:06:55.694575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.694589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.702064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.965 [2024-07-12 11:06:55.702921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.702936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.710423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.965 [2024-07-12 11:06:55.711245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.711260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.718790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.965 [2024-07-12 11:06:55.719650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.719665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.727158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.965 [2024-07-12 11:06:55.727993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.728008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.735533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.965 [2024-07-12 11:06:55.736346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 
11:06:55.736361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.743908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.965 [2024-07-12 11:06:55.744761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.744776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.752294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.965 [2024-07-12 11:06:55.753150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.753164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.760668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.965 [2024-07-12 11:06:55.761520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.761535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.769023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.965 [2024-07-12 11:06:55.769847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.769861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.777419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.965 [2024-07-12 11:06:55.778247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.778261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.785781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.965 [2024-07-12 11:06:55.786638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.786653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.794151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.965 [2024-07-12 11:06:55.795002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:38.965 [2024-07-12 11:06:55.795017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.802508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.965 [2024-07-12 11:06:55.803341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.965 [2024-07-12 11:06:55.803358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.965 [2024-07-12 11:06:55.810860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.965 [2024-07-12 11:06:55.811672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.811687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.819246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.966 [2024-07-12 11:06:55.820102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.820117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.827627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.966 [2024-07-12 11:06:55.828470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.828485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.835997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.966 [2024-07-12 11:06:55.836854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.836869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.844368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.966 [2024-07-12 11:06:55.845216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.845231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.852719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.966 [2024-07-12 11:06:55.853574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22750 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.853589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.861072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.966 [2024-07-12 11:06:55.861926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.861941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.869460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.966 [2024-07-12 11:06:55.870309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.870324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.877833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.966 [2024-07-12 11:06:55.878695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.878711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.886222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.966 [2024-07-12 11:06:55.887075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.887090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.894597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.966 [2024-07-12 11:06:55.895433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.895448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.902938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.966 [2024-07-12 11:06:55.903794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.903808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.911310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.966 [2024-07-12 11:06:55.912159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1193 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.912174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.919673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.966 [2024-07-12 11:06:55.920533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.920548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.928025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.966 [2024-07-12 11:06:55.928878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.928893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.936390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:38.966 [2024-07-12 11:06:55.937244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.937259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:38.966 [2024-07-12 11:06:55.944740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:38.966 [2024-07-12 11:06:55.945586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.966 [2024-07-12 11:06:55.945601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:55.953104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.228 [2024-07-12 11:06:55.953944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:55.953959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:55.961488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.228 [2024-07-12 11:06:55.962340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:55.962355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:55.969878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.228 [2024-07-12 11:06:55.970707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:70 nsid:1 lba:16119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:55.970722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:55.978245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.228 [2024-07-12 11:06:55.979096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:55.979111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:55.986579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.228 [2024-07-12 11:06:55.987430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:55.987445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:55.994950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.228 [2024-07-12 11:06:55.995798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:55.995813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:56.003322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.228 [2024-07-12 11:06:56.004167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:56.004182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:56.011726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.228 [2024-07-12 11:06:56.012579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:56.012594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:56.020078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.228 [2024-07-12 11:06:56.020893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:56.020908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:56.028448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.228 [2024-07-12 11:06:56.029252] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:56.029267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:56.036808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.228 [2024-07-12 11:06:56.037657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:56.037672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:56.045173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.228 [2024-07-12 11:06:56.046005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:56.046020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:56.053542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.228 [2024-07-12 11:06:56.054398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:56.054413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:56.061899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.228 [2024-07-12 11:06:56.062753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:56.062767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:56.070263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.228 [2024-07-12 11:06:56.071114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.228 [2024-07-12 11:06:56.071131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.228 [2024-07-12 11:06:56.078611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.229 [2024-07-12 11:06:56.079459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.079474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.086955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.229 [2024-07-12 11:06:56.087811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.087826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.095342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.229 [2024-07-12 11:06:56.096195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.096214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.103696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.229 [2024-07-12 11:06:56.104552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.104567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.112052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.229 [2024-07-12 11:06:56.112910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.112925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.120412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.229 [2024-07-12 11:06:56.121244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.121259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.128757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.229 [2024-07-12 11:06:56.129615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.129630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.137114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.229 [2024-07-12 11:06:56.137966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.137980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.145485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.229 [2024-07-12 
11:06:56.146338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.146353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.153888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.229 [2024-07-12 11:06:56.154739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.154754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.162249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.229 [2024-07-12 11:06:56.163102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.163116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.170602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.229 [2024-07-12 11:06:56.171450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.171465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.178962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.229 [2024-07-12 11:06:56.179816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.179831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.187349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.229 [2024-07-12 11:06:56.188200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.188215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.195720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.229 [2024-07-12 11:06:56.196575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.196590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.229 [2024-07-12 11:06:56.204082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 
00:28:39.229 [2024-07-12 11:06:56.204936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.229 [2024-07-12 11:06:56.204951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.491 [2024-07-12 11:06:56.212431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.491 [2024-07-12 11:06:56.213251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.491 [2024-07-12 11:06:56.213266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.491 [2024-07-12 11:06:56.220801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.491 [2024-07-12 11:06:56.221649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.491 [2024-07-12 11:06:56.221664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.491 [2024-07-12 11:06:56.229179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.491 [2024-07-12 11:06:56.230035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.491 [2024-07-12 11:06:56.230050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.491 [2024-07-12 11:06:56.237574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.491 [2024-07-12 11:06:56.238430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.491 [2024-07-12 11:06:56.238445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.491 [2024-07-12 11:06:56.245970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.491 [2024-07-12 11:06:56.246828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.491 [2024-07-12 11:06:56.246843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.491 [2024-07-12 11:06:56.254343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.491 [2024-07-12 11:06:56.255187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.491 [2024-07-12 11:06:56.255203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.491 [2024-07-12 11:06:56.262718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with 
pdu=0x2000190f9b30 00:28:39.491 [2024-07-12 11:06:56.263569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.491 [2024-07-12 11:06:56.263584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.491 [2024-07-12 11:06:56.271055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.491 [2024-07-12 11:06:56.271914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.491 [2024-07-12 11:06:56.271929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.279441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.280256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.280271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.287818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.288631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.288645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.296211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.297054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.297069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.304592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.305415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.305430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.312976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.313829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.313847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.321355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.322205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.322220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.329728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.330552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.330567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.338102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.338956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.338971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.346502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.347354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.347369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.354855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.355703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.355718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.363223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.364072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.364087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.371611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.372477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.372492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.380010] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.380753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.380769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.388388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.389241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.389257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.396754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.397608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.397624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.405127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.405976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.405991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.413511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.414343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.414358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.421883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.422732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.422747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.430339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.431184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.431199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.438721] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.439574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.439590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.447075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.447930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.447945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.455558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.456414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.456429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.463953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.492 [2024-07-12 11:06:56.464810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.464825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.492 [2024-07-12 11:06:56.472486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.492 [2024-07-12 11:06:56.473345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.492 [2024-07-12 11:06:56.473359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.753 [2024-07-12 11:06:56.480887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.753 [2024-07-12 11:06:56.481737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.753 [2024-07-12 11:06:56.481752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.753 [2024-07-12 11:06:56.489265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.753 [2024-07-12 11:06:56.490126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.753 [2024-07-12 11:06:56.490141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.753 
[2024-07-12 11:06:56.497644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.753 [2024-07-12 11:06:56.498496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.753 [2024-07-12 11:06:56.498511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.753 [2024-07-12 11:06:56.506022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.753 [2024-07-12 11:06:56.506879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.753 [2024-07-12 11:06:56.506894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.753 [2024-07-12 11:06:56.514408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.753 [2024-07-12 11:06:56.515247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.753 [2024-07-12 11:06:56.515262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.753 [2024-07-12 11:06:56.522824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.753 [2024-07-12 11:06:56.523680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.753 [2024-07-12 11:06:56.523695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.753 [2024-07-12 11:06:56.531224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.753 [2024-07-12 11:06:56.532069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.753 [2024-07-12 11:06:56.532087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.753 [2024-07-12 11:06:56.539589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.753 [2024-07-12 11:06:56.540453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.753 [2024-07-12 11:06:56.540468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.753 [2024-07-12 11:06:56.547961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.753 [2024-07-12 11:06:56.548817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.753 [2024-07-12 11:06:56.548832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 
m:0 dnr:0 00:28:39.753 [2024-07-12 11:06:56.556340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.753 [2024-07-12 11:06:56.557187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.753 [2024-07-12 11:06:56.557202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.753 [2024-07-12 11:06:56.564717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.565529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.565543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.573087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.754 [2024-07-12 11:06:56.573899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.573914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.581460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.582270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.582284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.589876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.754 [2024-07-12 11:06:56.590727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.590742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.598255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.599097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.599111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.606615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.754 [2024-07-12 11:06:56.607452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.607468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.614987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.615851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.615866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.623353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.754 [2024-07-12 11:06:56.624200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.624215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.631702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.632556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.632571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.640130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.754 [2024-07-12 11:06:56.640979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.640994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.648510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.649377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.649393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.656898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.754 [2024-07-12 11:06:56.657719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.657734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.665314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.666168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.666182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.673683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.754 [2024-07-12 11:06:56.674548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.674563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.682032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.682888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.682903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.690411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.754 [2024-07-12 11:06:56.691130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.691145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.698769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.699622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.699637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.707149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.754 [2024-07-12 11:06:56.707996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.708010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.715529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.716341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.716356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.723886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:39.754 [2024-07-12 11:06:56.724740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.724755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.754 [2024-07-12 11:06:56.732271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:39.754 [2024-07-12 11:06:56.733121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.754 [2024-07-12 11:06:56.733139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.740652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.741507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.741523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.749040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.749934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.749952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.757448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.758300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.758315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.765819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.766645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.766660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.774175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.775021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.775036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.782539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.783392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.783407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.790901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.791757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.791772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.799311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.800135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.800150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.807670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.808525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.808540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.816027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.816877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.816893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.824418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.825275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.825290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.832803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.833668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.833684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.841177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.842028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 
11:06:56.842043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.849542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.850356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.850371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.857899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.858748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.858762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.866271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.867128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.867143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.874672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.875537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.875552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.883041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.883896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.883911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.891418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.892268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.892283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.899768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.900582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:40.016 [2024-07-12 11:06:56.900597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.908146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.908999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.909014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.916530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.917355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.917371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.924898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.925754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.925769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.933294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.934101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.934116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.941660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.942506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.942521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.950018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.016 [2024-07-12 11:06:56.950865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.950880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.016 [2024-07-12 11:06:56.958391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.016 [2024-07-12 11:06:56.959202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10464 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:40.016 [2024-07-12 11:06:56.959217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.017 [2024-07-12 11:06:56.966773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.017 [2024-07-12 11:06:56.967615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.017 [2024-07-12 11:06:56.967633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.017 [2024-07-12 11:06:56.975180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.017 [2024-07-12 11:06:56.976025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.017 [2024-07-12 11:06:56.976040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.017 [2024-07-12 11:06:56.983545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.017 [2024-07-12 11:06:56.984361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.017 [2024-07-12 11:06:56.984376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.017 [2024-07-12 11:06:56.991908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.017 [2024-07-12 11:06:56.992720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.017 [2024-07-12 11:06:56.992735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.000262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.278 [2024-07-12 11:06:57.001107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.001126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.008634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.278 [2024-07-12 11:06:57.009472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.009488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.017018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.278 [2024-07-12 11:06:57.017838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17127 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.017853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.025393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.278 [2024-07-12 11:06:57.026238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.026253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.033763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.278 [2024-07-12 11:06:57.034619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.034635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.042106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.278 [2024-07-12 11:06:57.042962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.042977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.050474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.278 [2024-07-12 11:06:57.051322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.051337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.058852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.278 [2024-07-12 11:06:57.059705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.059720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.067209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.278 [2024-07-12 11:06:57.068055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.068069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.075570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.278 [2024-07-12 11:06:57.076387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:113 nsid:1 lba:24162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.076402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.083926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.278 [2024-07-12 11:06:57.084778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.084793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.092272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.278 [2024-07-12 11:06:57.093124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.093139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.100646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.278 [2024-07-12 11:06:57.101463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.278 [2024-07-12 11:06:57.101478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.278 [2024-07-12 11:06:57.109020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.279 [2024-07-12 11:06:57.109873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.109887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.117389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.279 [2024-07-12 11:06:57.118231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.118246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.125737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.279 [2024-07-12 11:06:57.126592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.126607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.134074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.279 [2024-07-12 11:06:57.134936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.134951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.142436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.279 [2024-07-12 11:06:57.143282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.143296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.150792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.279 [2024-07-12 11:06:57.151651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.151665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.159158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.279 [2024-07-12 11:06:57.160009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.160024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.167525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.279 [2024-07-12 11:06:57.168355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.168370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.175868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.279 [2024-07-12 11:06:57.176723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.176739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.184199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.279 [2024-07-12 11:06:57.185004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.185021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.192586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.279 [2024-07-12 
11:06:57.193454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.193468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.200943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.279 [2024-07-12 11:06:57.201802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.201818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.209315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.279 [2024-07-12 11:06:57.210169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.210184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.217675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.279 [2024-07-12 11:06:57.218489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.218504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.226029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.279 [2024-07-12 11:06:57.226845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.226860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.234397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.279 [2024-07-12 11:06:57.235243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.235257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.242749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.279 [2024-07-12 11:06:57.243598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.243613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.251114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 
00:28:40.279 [2024-07-12 11:06:57.251968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.251982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.279 [2024-07-12 11:06:57.259503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.279 [2024-07-12 11:06:57.260344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.279 [2024-07-12 11:06:57.260358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.539 [2024-07-12 11:06:57.267857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.539 [2024-07-12 11:06:57.268699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.539 [2024-07-12 11:06:57.268713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.539 [2024-07-12 11:06:57.276206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.539 [2024-07-12 11:06:57.277057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.539 [2024-07-12 11:06:57.277072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.539 [2024-07-12 11:06:57.284578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.540 [2024-07-12 11:06:57.285416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.540 [2024-07-12 11:06:57.285431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.540 [2024-07-12 11:06:57.292946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.540 [2024-07-12 11:06:57.293793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.540 [2024-07-12 11:06:57.293808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.540 [2024-07-12 11:06:57.301312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.540 [2024-07-12 11:06:57.302162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.540 [2024-07-12 11:06:57.302178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.540 [2024-07-12 11:06:57.309659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with 
pdu=0x2000190f7da8 00:28:40.540 [2024-07-12 11:06:57.310516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.540 [2024-07-12 11:06:57.310531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.540 [2024-07-12 11:06:57.318011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.540 [2024-07-12 11:06:57.318858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.540 [2024-07-12 11:06:57.318873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.540 [2024-07-12 11:06:57.326366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.540 [2024-07-12 11:06:57.327212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.540 [2024-07-12 11:06:57.327227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.540 [2024-07-12 11:06:57.334736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.540 [2024-07-12 11:06:57.335591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.540 [2024-07-12 11:06:57.335606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.540 [2024-07-12 11:06:57.343102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.540 [2024-07-12 11:06:57.343952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.540 [2024-07-12 11:06:57.343967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.540 [2024-07-12 11:06:57.351491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f9b30 00:28:40.540 [2024-07-12 11:06:57.352338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.540 [2024-07-12 11:06:57.352353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.540 [2024-07-12 11:06:57.359839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8 00:28:40.540 [2024-07-12 11:06:57.360693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.540 [2024-07-12 11:06:57.360708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.540 [2024-07-12 11:06:57.368192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1130aa0) with pdu=0x2000190f9b30
00:28:40.540 [2024-07-12 11:06:57.369040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.540 [2024-07-12 11:06:57.369055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:28:40.540 [2024-07-12 11:06:57.376560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1130aa0) with pdu=0x2000190f7da8
00:28:40.540 [2024-07-12 11:06:57.377424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.540 [2024-07-12 11:06:57.377439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:28:40.540 
00:28:40.540                                                      Latency(us)
00:28:40.540 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:40.540 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:40.540 nvme0n1                     :       2.00   30396.19     118.74       0.00       0.00    4205.31    2225.49   12834.13
00:28:40.540 ===================================================================================================================
00:28:40.540 Total                       :              30396.19     118.74       0.00       0.00    4205.31    2225.49   12834.13
00:28:40.540 0
00:28:40.540 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:40.540 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:40.540 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:40.540 | .driver_specific
00:28:40.540 | .nvme_error
00:28:40.540 | .status_code
00:28:40.540 | .command_transient_transport_error'
00:28:40.540 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 238 > 0 ))
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2271324
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2271324 ']'
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2271324
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2271324
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2271324'
00:28:40.800 killing process with pid 2271324
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2271324
00:28:40.800 Received shutdown signal, test time was about 2.000000 seconds
00:28:40.800 
00:28:40.800                                                      Latency(us)
00:28:40.800 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:40.800 ===================================================================================================================
00:28:40.800 Total                       :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2271324
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2272010
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2272010 /var/tmp/bperf.sock
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2272010 ']'
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:40.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:40.800 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:41.060 11:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:41.060 [2024-07-12 11:06:57.798054] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:41.060 [2024-07-12 11:06:57.798108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272010 ]
00:28:41.060 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:41.060 Zero copy mechanism will not be used.
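[Editor's note: commentary added during editing, not part of the captured console output. The xtrace lines above (host/digest.sh@71, @27, @28) show the pass/fail gate applied after each error run: the harness queries bdevperf's per-bdev NVMe error counters over the JSON-RPC socket and requires at least one write to have completed with TRANSIENT TRANSPORT ERROR. A minimal bash sketch of that check follows; the rpc.py path, socket path, and jq filter are verbatim from this job's trace, while the function body itself is a reconstruction, not a copy of digest.sh:

    #!/usr/bin/env bash
    # Sketch of the transient-error check traced at host/digest.sh@27-28 and @71.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    get_transient_errcount() {
        # bdev_nvme_set_options --nvme-error-stat (issued before the workload)
        # makes bdev_get_iostat report per-status-code NVMe error counters.
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # The 4096-byte/qd128 run above counted 238 such completions, so the
    # assertion passed and the bdevperf instance (pid 2271324) was killed.
    (( $(get_transient_errcount nvme0n1) > 0 ))

The same gate is applied again after the 131072-byte/qd16 run whose startup is logged above.]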
00:28:41.060 EAL: No free 2048 kB hugepages reported on node 1
00:28:41.060 [2024-07-12 11:06:57.871736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:41.060 [2024-07-12 11:06:57.924838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:41.631 11:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:41.631 11:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:41.631 11:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:41.631 11:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:41.891 11:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:41.891 11:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:41.891 11:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:41.891 11:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:41.891 11:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:41.891 11:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:42.150 nvme0n1
00:28:42.151 11:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:42.151 11:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:42.151 11:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:42.151 11:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:42.151 11:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:42.151 11:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:42.151 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:42.151 Zero copy mechanism will not be used.
00:28:42.151 Running I/O for 2 seconds...
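[Editor's note: commentary added during editing, not part of the captured console output. The trace above arms this second error run; the digest errors that follow are injected deliberately rather than produced by a faulty link. Below is a condensed replay of the traced RPC sequence. Every flag, address, and NQN is verbatim from the trace; the split between the bdevperf socket (bperf_rpc) and the nvmf target's default RPC socket (rpc_cmd), and the reading of -i 32 as an injection interval, are the editor's assumptions about the harness:

    #!/usr/bin/env bash
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"  # bdevperf side (bperf_rpc)
    TGT_RPC="$SPDK_DIR/scripts/rpc.py"                           # target side, default socket (assumed for rpc_cmd)

    # Keep per-status-code NVMe error counters and retry failed I/O forever,
    # so digest errors show up as transient-transport-error counts, not I/O failures.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any crc32c error injection left over from the previous run.
    $TGT_RPC accel_error_inject_error -o crc32c -t disable

    # Connect with TCP data digest enabled (--ddgst): each data PDU now
    # carries a crc32c digest that the receiving side verifies.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-enable injection, now corrupting crc32c results (-i 32 as traced).
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # Start the configured 2-second randwrite workload.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each "Data digest error" below is the visible effect: tcp.c detects the corrupted digest, the WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and that completion feeds the counter later read by get_transient_errcount.]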
00:28:42.412 [2024-07-12 11:06:59.143386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.143801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.143827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.158058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.158459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.158479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.169811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.170205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.170224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.181505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.181697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.181717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.191822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.192148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.192166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.202381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.202668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.202686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.209381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.209676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.209693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.216528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.216645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.216660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.225426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.225748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.225764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.236527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.236852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.236869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.247179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.247519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.247535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.256793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.256871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.256885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.266332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.266475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.266490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.276853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.277068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.277084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.283699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.284027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.284043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.291240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.291562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.291578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.300943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.301302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.301319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.308923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.309259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.309276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.314759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.412 [2024-07-12 11:06:59.315095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.412 [2024-07-12 11:06:59.315112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.412 [2024-07-12 11:06:59.322048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.322365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 [2024-07-12 11:06:59.322382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.413 [2024-07-12 11:06:59.330200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.330527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 [2024-07-12 11:06:59.330544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.413 [2024-07-12 11:06:59.336438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.336775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 [2024-07-12 11:06:59.336791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.413 [2024-07-12 11:06:59.342328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.342516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 [2024-07-12 11:06:59.342531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.413 [2024-07-12 11:06:59.352325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.352633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 [2024-07-12 11:06:59.352649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.413 [2024-07-12 11:06:59.358548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.358763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 [2024-07-12 11:06:59.358779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.413 [2024-07-12 11:06:59.366860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.367063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 [2024-07-12 11:06:59.367079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.413 [2024-07-12 11:06:59.372709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.373036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 [2024-07-12 11:06:59.373052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.413 [2024-07-12 11:06:59.379143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.379455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 
[2024-07-12 11:06:59.379471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.413 [2024-07-12 11:06:59.385113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.385332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 [2024-07-12 11:06:59.385347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.413 [2024-07-12 11:06:59.390661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.413 [2024-07-12 11:06:59.390866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.413 [2024-07-12 11:06:59.390884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.396417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.396630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.396645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.402117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.402328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.402343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.408140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.408494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.408510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.414566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.414855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.414871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.420330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.420544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.420559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.426466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.426760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.426775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.434006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.434224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.434239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.443079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.443378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.443394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.451431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.451762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.451778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.461125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.461446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.461462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.470397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.470778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.470794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.478408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.478756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.478772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.486239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.486595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.486611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.495226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.495538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.495554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.504321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.504632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.504648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.513468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.513681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.674 [2024-07-12 11:06:59.513696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.674 [2024-07-12 11:06:59.519458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.674 [2024-07-12 11:06:59.519673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.519691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.525837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.526051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.526066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.532129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.532333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.532348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.537533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.537833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.537848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.543467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.543778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.543794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.550583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.550750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.550764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.559986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.560292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.560308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.566712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.566927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.566942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.575302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.575631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.575647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.581446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 
[2024-07-12 11:06:59.581780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.581796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.589537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.589857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.589873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.596019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.596311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.596327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.602213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.602416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.602431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.610494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.610826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.610842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.620727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.621062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.621078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.629494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.629698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.629713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.637092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.637474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.637490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.643796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.644139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.644154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.650114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.650459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.650475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.675 [2024-07-12 11:06:59.655494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.675 [2024-07-12 11:06:59.655706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.675 [2024-07-12 11:06:59.655721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.661504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.661647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.661662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.668874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.669157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.669178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.678613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.678907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.678923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.689226] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.689550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.689566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.696632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.696976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.696992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.703940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.704279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.704295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.711659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.712030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.712050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.722147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.722468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.722484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.731138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.731479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.731495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.741278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.741600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.741616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:42.936 [2024-07-12 11:06:59.749984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.750340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.750356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.756386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.936 [2024-07-12 11:06:59.756699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.936 [2024-07-12 11:06:59.756715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.936 [2024-07-12 11:06:59.764403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.764724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.764741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.773995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.774215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.774230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.782851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.782989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.783004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.792233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.792538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.792555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.802590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.802811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.802826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.811481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.811794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.811811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.821054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.821352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.821368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.829614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.829828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.829843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.841031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.841239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.841255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.847807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.848020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.848035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.857087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.857429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.857445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.864017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.864353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.864368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.870736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.871026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.871042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.877129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.877488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.877504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.886092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.886309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.886324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.896264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.896608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.896623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.905234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.905449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.905465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.911181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.911523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.911539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.937 [2024-07-12 11:06:59.918014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:42.937 [2024-07-12 11:06:59.918353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.937 [2024-07-12 11:06:59.918369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:06:59.924733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:06:59.925057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:06:59.925073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:06:59.930961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:06:59.931179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:06:59.931197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:06:59.937924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:06:59.938143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:06:59.938159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:06:59.945558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:06:59.945772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:06:59.945788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:06:59.955425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:06:59.955722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:06:59.955738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:06:59.964195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:06:59.964410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:06:59.964424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:06:59.971236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:06:59.971451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 
[2024-07-12 11:06:59.971466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:06:59.979587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:06:59.979940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:06:59.979956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:06:59.990296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:06:59.990635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:06:59.990651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.000761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.001142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.001159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.011454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.011677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.011693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.022061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.022320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.022337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.028592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.028667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.028681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.035783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.035989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.036005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.044775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.045031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.045046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.055352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.055613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.055628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.061282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.061555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.061570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.067339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.067536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.067551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.073032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.073231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.073250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.078503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.078755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.078770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.083492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.083690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.083706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.199 [2024-07-12 11:07:00.089244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.199 [2024-07-12 11:07:00.089471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.199 [2024-07-12 11:07:00.089487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.095196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.095439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.095455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.102911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.103162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.103177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.112048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.112431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.112447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.118476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.118694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.118709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.124404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.124762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.124779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.131176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.131379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.131394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.138514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.138708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.138724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.145428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.145727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.145742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.151481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.151873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.151889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.158573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.158769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.158785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.165631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.165857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.165872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.200 [2024-07-12 11:07:00.173422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.200 [2024-07-12 11:07:00.173615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.200 [2024-07-12 11:07:00.173631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.181992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 
[2024-07-12 11:07:00.182242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.182257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.190470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.190663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.190678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.198909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.199103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.199118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.206296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.206496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.206511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.212195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.212446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.212461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.218412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.218770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.218786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.225584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.225777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.225792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.232894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.233088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.233102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.238552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.238744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.238759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.244413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.244620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.244635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.249375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.249569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.249587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.253972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.254168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.254183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.259061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.259264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.259279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.263630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.263866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.263881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.269658] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.269907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.269923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.277160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.277353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.277368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.283684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.283926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.283941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.290238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.290432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.290447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.298077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.298438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.298454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.306410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.306672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.306688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.313284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.313490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.313511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:43.462 [2024-07-12 11:07:00.319166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.319561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.319577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.326082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.326322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.326337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.333164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.333359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.333374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.338579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.338771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.338786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.344695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.344908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.344922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.351869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.352061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.462 [2024-07-12 11:07:00.352076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.462 [2024-07-12 11:07:00.359452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.462 [2024-07-12 11:07:00.359783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.359800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.367191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.367384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.367400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.372187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.372416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.372430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.377511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.377731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.377746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.383409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.383614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.383631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.388017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.388216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.388232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.393670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.393864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.393879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.398568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.398788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.398804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.404885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.405188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.405204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.410601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.410817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.410836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.416497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.416696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.416711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.422454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.422737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.422753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.427797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.427991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.428007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.434031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.434451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.434467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.463 [2024-07-12 11:07:00.439665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.463 [2024-07-12 11:07:00.439859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.463 [2024-07-12 11:07:00.439875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.723 [2024-07-12 11:07:00.444574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.723 [2024-07-12 11:07:00.444767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.723 [2024-07-12 11:07:00.444783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.723 [2024-07-12 11:07:00.451321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.723 [2024-07-12 11:07:00.451629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.723 [2024-07-12 11:07:00.451644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.723 [2024-07-12 11:07:00.460467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.723 [2024-07-12 11:07:00.460677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.723 [2024-07-12 11:07:00.460692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.723 [2024-07-12 11:07:00.469735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.723 [2024-07-12 11:07:00.469936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.723 [2024-07-12 11:07:00.469951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.723 [2024-07-12 11:07:00.477140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.723 [2024-07-12 11:07:00.477335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.723 [2024-07-12 11:07:00.477350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.723 [2024-07-12 11:07:00.486999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.487352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.487369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.496750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.497167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 
[2024-07-12 11:07:00.497183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.507557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.507804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.507819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.517820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.518155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.518170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.528133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.528502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.528518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.538511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.538901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.538917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.548200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.548514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.548530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.557853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.558275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.558291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.566899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.567094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.567109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.575975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.576175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.576190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.585071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.585436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.585453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.591588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.591855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.591871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.597848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.598094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.598117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.604027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.604224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.604240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.610680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.611027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.611042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.620161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.620356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.620373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.627905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.628244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.628260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.636222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.636438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.636455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.645907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.646417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.646433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.652366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.652593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.652608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.659305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.659536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.659552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.667570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.667808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.667823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.674934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.675368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.675384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.685961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.686213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.686228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.694779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.695129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.695145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.724 [2024-07-12 11:07:00.703744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.724 [2024-07-12 11:07:00.704129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.724 [2024-07-12 11:07:00.704145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.713112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.713634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.713650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.721440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.721750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.721766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.733640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.734014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.734030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.742458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 
[2024-07-12 11:07:00.742811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.742827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.750953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.751174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.751189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.758329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.758645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.758660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.768068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.768456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.768476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.778216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.778492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.778507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.787596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.787922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.787938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.798396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.798807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.798823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.809022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.809354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.809370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.817044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.817324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.817340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.821875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.822173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.822189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.831155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.831400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.831417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.836247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.836597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.836613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.841683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.841884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.841900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.849049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.849246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.849261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.855022] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.855294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.855311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.862322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.862552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.862567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.871132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.871485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.871501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.879271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.879468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.879485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.886560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.886753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.886770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.891952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.892167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.892184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.985 [2024-07-12 11:07:00.897777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.985 [2024-07-12 11:07:00.898039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.985 [2024-07-12 11:07:00.898056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:43.985 [2024-07-12 11:07:00.904226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.986 [2024-07-12 11:07:00.904478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.986 [2024-07-12 11:07:00.904493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.986 [2024-07-12 11:07:00.909525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.986 [2024-07-12 11:07:00.909809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.986 [2024-07-12 11:07:00.909825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.986 [2024-07-12 11:07:00.915292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.986 [2024-07-12 11:07:00.915593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.986 [2024-07-12 11:07:00.915609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.986 [2024-07-12 11:07:00.920495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.986 [2024-07-12 11:07:00.920745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.986 [2024-07-12 11:07:00.920761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.986 [2024-07-12 11:07:00.925604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.986 [2024-07-12 11:07:00.925798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.986 [2024-07-12 11:07:00.925814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.986 [2024-07-12 11:07:00.933215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.986 [2024-07-12 11:07:00.933415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.986 [2024-07-12 11:07:00.933430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.986 [2024-07-12 11:07:00.939594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.986 [2024-07-12 11:07:00.939853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.986 [2024-07-12 11:07:00.939869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.986 [2024-07-12 11:07:00.947571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.986 [2024-07-12 11:07:00.947927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.986 [2024-07-12 11:07:00.947944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.986 [2024-07-12 11:07:00.955118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.986 [2024-07-12 11:07:00.955593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.986 [2024-07-12 11:07:00.955614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.986 [2024-07-12 11:07:00.964491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:43.986 [2024-07-12 11:07:00.964776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.986 [2024-07-12 11:07:00.964791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:00.971770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:00.972044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:00.972060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:00.977873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:00.978098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:00.978113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:00.984023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:00.984248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:00.984264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:00.994310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:00.994558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:00.994574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.005017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.005245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.005261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.014192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.014643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.014659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.024770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.025083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.025098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.035753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.036074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.036089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.045882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.046056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.046071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.053981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.054154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.054169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.060934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.061086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.061101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.068186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.068528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.068544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.077180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.077359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.077383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.083710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.083901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.083916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.089034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.089252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.089267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.095958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.096187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.096202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.102864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.103119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.103148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.109020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.109229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 
[2024-07-12 11:07:01.109245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.114145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.114489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.114506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.120582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.120827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.120842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.246 [2024-07-12 11:07:01.130151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1225c80) with pdu=0x2000190fef90 00:28:44.246 [2024-07-12 11:07:01.130264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.246 [2024-07-12 11:07:01.130279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.246 00:28:44.246 Latency(us) 00:28:44.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.246 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:44.246 nvme0n1 : 2.01 4004.02 500.50 0.00 0.00 3987.82 2007.04 16056.32 00:28:44.246 =================================================================================================================== 00:28:44.246 Total : 4004.02 500.50 0.00 0.00 3987.82 2007.04 16056.32 00:28:44.246 0 00:28:44.246 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:44.246 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:44.246 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:44.246 | .driver_specific 00:28:44.246 | .nvme_error 00:28:44.246 | .status_code 00:28:44.246 | .command_transient_transport_error' 00:28:44.246 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:44.517 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 259 > 0 )) 00:28:44.517 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2272010 00:28:44.517 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2272010 ']' 00:28:44.517 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2272010 00:28:44.517 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:44.517 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:28:44.517 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2272010 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2272010' 00:28:44.518 killing process with pid 2272010 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2272010 00:28:44.518 Received shutdown signal, test time was about 2.000000 seconds 00:28:44.518 00:28:44.518 Latency(us) 00:28:44.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.518 =================================================================================================================== 00:28:44.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2272010 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2269611 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2269611 ']' 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2269611 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:44.518 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2269611 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2269611' 00:28:44.784 killing process with pid 2269611 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2269611 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2269611 00:28:44.784 00:28:44.784 real 0m16.276s 00:28:44.784 user 0m32.096s 00:28:44.784 sys 0m3.276s 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.784 ************************************ 00:28:44.784 END TEST nvmf_digest_error 00:28:44.784 ************************************ 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:44.784 11:07:01 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:44.784 rmmod nvme_tcp 00:28:44.784 rmmod nvme_fabrics 00:28:44.784 rmmod nvme_keyring 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2269611 ']' 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2269611 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2269611 ']' 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2269611 00:28:44.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2269611) - No such process 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2269611 is not found' 00:28:44.784 Process with pid 2269611 is not found 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.784 11:07:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.327 11:07:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:47.327 00:28:47.327 real 0m42.686s 00:28:47.327 user 1m6.643s 00:28:47.327 sys 0m12.457s 00:28:47.327 11:07:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:47.327 11:07:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:47.327 ************************************ 00:28:47.327 END TEST nvmf_digest 00:28:47.327 ************************************ 00:28:47.327 11:07:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:47.327 11:07:03 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:47.327 11:07:03 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:47.327 11:07:03 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:47.327 11:07:03 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:47.327 11:07:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:47.327 11:07:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.327 11:07:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:47.327 ************************************ 00:28:47.327 START TEST nvmf_bdevperf 00:28:47.327 ************************************ 00:28:47.327 11:07:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:47.327 * Looking for test storage... 00:28:47.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:47.327 11:07:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.504 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.504 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:55.504 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:55.504 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:55.504 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:55.504 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:55.504 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:55.504 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:55.505 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:55.505 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.505 11:07:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:55.505 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:55.505 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:55.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:28:55.505 00:28:55.505 --- 10.0.0.2 ping statistics --- 00:28:55.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.505 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:55.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:28:55.505 00:28:55.505 --- 10.0.0.1 ping statistics --- 00:28:55.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.505 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2276984 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2276984 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2276984 ']' 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.505 11:07:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.505 [2024-07-12 11:07:11.442090] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:55.505 [2024-07-12 11:07:11.442173] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.505 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.505 [2024-07-12 11:07:11.532402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:55.505 [2024-07-12 11:07:11.626587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
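The trace above shows how the target side comes up: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace that nvmftestinit created, then waits for it to be ready. Stripped of the xtrace noise, that amounts to the following sketch (waitforlisten is the helper from test/common/autotest_common.sh seen in the trace; it blocks until the launched app is up and serving RPCs):

    # launch the NVMe-oF target in the test namespace: core mask 0xE, tracepoint mask 0xFFFF
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # returns once the target accepts RPC connections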
00:28:55.505 [2024-07-12 11:07:11.626647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.505 [2024-07-12 11:07:11.626655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.505 [2024-07-12 11:07:11.626662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.506 [2024-07-12 11:07:11.626669] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.506 [2024-07-12 11:07:11.626834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.506 [2024-07-12 11:07:11.626978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.506 [2024-07-12 11:07:11.626978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.506 [2024-07-12 11:07:12.291565] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.506 Malloc0 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:55.506 [2024-07-12 11:07:12.362602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:28:55.506 {
00:28:55.506 "params": {
00:28:55.506 "name": "Nvme$subsystem",
00:28:55.506 "trtype": "$TEST_TRANSPORT",
00:28:55.506 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:55.506 "adrfam": "ipv4",
00:28:55.506 "trsvcid": "$NVMF_PORT",
00:28:55.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:55.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:55.506 "hdgst": ${hdgst:-false},
00:28:55.506 "ddgst": ${ddgst:-false}
00:28:55.506 },
00:28:55.506 "method": "bdev_nvme_attach_controller"
00:28:55.506 }
00:28:55.506 EOF
00:28:55.506 )")
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:28:55.506 11:07:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:28:55.506 "params": {
00:28:55.506 "name": "Nvme1",
00:28:55.506 "trtype": "tcp",
00:28:55.506 "traddr": "10.0.0.2",
00:28:55.506 "adrfam": "ipv4",
00:28:55.506 "trsvcid": "4420",
00:28:55.506 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:55.506 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:55.506 "hdgst": false,
00:28:55.506 "ddgst": false
00:28:55.506 },
00:28:55.506 "method": "bdev_nvme_attach_controller"
00:28:55.506 }'
00:28:55.506 [2024-07-12 11:07:12.420881] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:55.506 [2024-07-12 11:07:12.420945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277058 ]
00:28:55.506 EAL: No free 2048 kB hugepages reported on node 1
00:28:55.767 [2024-07-12 11:07:12.502629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:55.767 [2024-07-12 11:07:12.598831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:56.028 Running I/O for 1 seconds...
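With the target provisioned, the first bdevperf pass above is a one-second verify run against the exported namespace. For reference, the sequence the rpc_cmd/bdevperf trace performs can be reproduced by hand roughly as follows, a minimal sketch assuming a shell at the SPDK repo root and a target reachable on the default RPC socket (gen_nvmf_target_json is the test helper whose output is printed above; the process substitution plays the role of the --json /dev/fd/62 redirection in the trace):

    # provision the target with the same RPCs and flags as the trace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # short verify pass, as in the run whose results follow
    build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1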
00:28:56.971
00:28:56.971                                                       Latency(us)
00:28:56.971 Device Information          :  runtime(s)    IOPS     MiB/s   Fail/s   TO/s    Average    min       max
00:28:56.971 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:56.971      Verification LBA range: start 0x0 length 0x4000
00:28:56.971      Nvme1n1                :     1.01     8713.78  34.04   0.00     0.00    14616.78   3072.00   15619.41
00:28:56.971 ===================================================================================================================
00:28:56.971 Total                       :              8713.78  34.04   0.00     0.00    14616.78   3072.00   15619.41
00:28:56.971 11:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2277395
00:28:56.971 11:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:28:56.971 11:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:56.971 11:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:56.971 11:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:28:56.971 11:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:28:56.971 11:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:28:56.971 11:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:28:56.971 {
00:28:56.971 "params": {
00:28:56.971 "name": "Nvme$subsystem",
00:28:56.971 "trtype": "$TEST_TRANSPORT",
00:28:56.971 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:56.971 "adrfam": "ipv4",
00:28:56.971 "trsvcid": "$NVMF_PORT",
00:28:56.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:56.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:56.971 "hdgst": ${hdgst:-false},
00:28:56.971 "ddgst": ${ddgst:-false}
00:28:56.971 },
00:28:56.971 "method": "bdev_nvme_attach_controller"
00:28:56.971 }
00:28:56.971 EOF
00:28:56.971 )")
00:28:56.971 11:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:28:56.971 11:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:28:56.972 11:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:28:56.972 11:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:28:56.972 "params": {
00:28:56.972 "name": "Nvme1",
00:28:56.972 "trtype": "tcp",
00:28:56.972 "traddr": "10.0.0.2",
00:28:56.972 "adrfam": "ipv4",
00:28:56.972 "trsvcid": "4420",
00:28:56.972 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:56.972 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:56.972 "hdgst": false,
00:28:56.972 "ddgst": false
00:28:56.972 },
00:28:56.972 "method": "bdev_nvme_attach_controller"
00:28:56.972 }'
00:28:57.233 [2024-07-12 11:07:13.958825] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:28:57.233 [2024-07-12 11:07:13.958881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277395 ]
00:28:57.233 EAL: No free 2048 kB hugepages reported on node 1
00:28:57.233 [2024-07-12 11:07:14.036426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:57.233 [2024-07-12 11:07:14.100158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:57.494 Running I/O for 15 seconds...
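As a sanity check on the 1-second verify run summarized in the table above: at the fixed 4096-byte I/O size (-o 4096), throughput in MiB/s follows directly from IOPS, i.e. 8713.78 x 4096 / 1048576 = 34.04 MiB/s, matching the MiB/s column. A one-liner using nothing beyond the numbers printed in the table:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8713.78 * 4096 / (1024 * 1024) }'   # prints 34.04 MiB/s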
00:29:00.042 11:07:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2276984
00:29:00.042 11:07:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:29:00.042 [2024-07-12 11:07:16.925250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:00.042 [2024-07-12 11:07:16.925292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:00.042 [2024-07-12 11:07:16.925314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:00.042 [2024-07-12 11:07:16.925323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... roughly 125 further print_command/print_completion pairs omitted (11:07:16.925337 through 11:07:16.927423): every remaining outstanding WRITE (lba 106128 through 106944) and READ (lba 105936 through 106104) on qid:1 completes with the same ABORTED - SQ DELETION (00/08) status ...]
00:29:00.046 [2024-07-12 11:07:16.927432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fea00 is same with the state(5) to be set
00:29:00.046 [2024-07-12 11:07:16.927441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:00.046 [2024-07-12 11:07:16.927447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:00.046 [2024-07-12 11:07:16.927454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106952 len:8 PRP1 0x0 PRP2 0x0
00:29:00.046 [2024-07-12 11:07:16.927462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:00.046 [2024-07-12 11:07:16.927499] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10fea00 was disconnected and freed. reset controller.
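The flood of ABORTED - SQ DELETION completions above is the intended fault injection, not a test malfunction: host/bdevperf.sh@33 hard-kills the target (pid 2276984) while the second bdevperf instance still has up to 128 I/Os in flight (-q 128), so every queued READ/WRITE on qid:1 is failed back and the qpair is freed before the host starts reconnecting. A sketch of that same fault-injection step, assuming shell variables holding the two pids (the variable names are placeholders, not the script's own):

    kill -9 "$nvmf_tgt_pid"    # hard-kill the target mid-run, as bdevperf.sh@33 does with pid 2276984
    sleep 3                    # give the host side time to observe the outage, as bdevperf.sh@35 does
    kill -0 "$bdevperf_pid" && echo "bdevperf still alive, retrying the controller"

The kill -0 probe only checks that the process still exists; the 15-second bdevperf run (-t 15 -f) is expected to ride out the outage and keep resetting the controller, which is exactly what the log shows next.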
00:29:00.046 [2024-07-12 11:07:16.930997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.046 [2024-07-12 11:07:16.931051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:00.046 [2024-07-12 11:07:16.931913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.047 [2024-07-12 11:07:16.931929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:00.047 [2024-07-12 11:07:16.931937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:00.047 [2024-07-12 11:07:16.932161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:00.047 [2024-07-12 11:07:16.932381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.047 [2024-07-12 11:07:16.932390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.047 [2024-07-12 11:07:16.932398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.047 [2024-07-12 11:07:16.935946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.047 [2024-07-12 11:07:16.945158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.047 [2024-07-12 11:07:16.945754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.047 [2024-07-12 11:07:16.945770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:00.047 [2024-07-12 11:07:16.945777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:00.047 [2024-07-12 11:07:16.946001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:00.047 [2024-07-12 11:07:16.946226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.047 [2024-07-12 11:07:16.946235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.047 [2024-07-12 11:07:16.946242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.047 [2024-07-12 11:07:16.949791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.047 [2024-07-12 11:07:16.959014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.047 [2024-07-12 11:07:16.959752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.047 [2024-07-12 11:07:16.959792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:00.047 [2024-07-12 11:07:16.959802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:00.047 [2024-07-12 11:07:16.960045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:00.047 [2024-07-12 11:07:16.960279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.047 [2024-07-12 11:07:16.960289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.047 [2024-07-12 11:07:16.960297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.047 [2024-07-12 11:07:16.963867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.047 [2024-07-12 11:07:16.972884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.047 [2024-07-12 11:07:16.973578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.047 [2024-07-12 11:07:16.973617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:00.047 [2024-07-12 11:07:16.973629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:00.047 [2024-07-12 11:07:16.973872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:00.047 [2024-07-12 11:07:16.974095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.047 [2024-07-12 11:07:16.974104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.047 [2024-07-12 11:07:16.974112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.047 [2024-07-12 11:07:16.977685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.047 [2024-07-12 11:07:16.986698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.047 [2024-07-12 11:07:16.987458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.047 [2024-07-12 11:07:16.987498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:00.047 [2024-07-12 11:07:16.987508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:00.047 [2024-07-12 11:07:16.987750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:00.047 [2024-07-12 11:07:16.987973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.047 [2024-07-12 11:07:16.987982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.047 [2024-07-12 11:07:16.987998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.047 [2024-07-12 11:07:16.991565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.047 [2024-07-12 11:07:17.000588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.047 [2024-07-12 11:07:17.001387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.047 [2024-07-12 11:07:17.001427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:00.047 [2024-07-12 11:07:17.001438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:00.047 [2024-07-12 11:07:17.001679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:00.047 [2024-07-12 11:07:17.001903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.047 [2024-07-12 11:07:17.001912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.047 [2024-07-12 11:07:17.001920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.047 [2024-07-12 11:07:17.005479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.047 [2024-07-12 11:07:17.014485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.047 [2024-07-12 11:07:17.015224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.047 [2024-07-12 11:07:17.015263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.047 [2024-07-12 11:07:17.015275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.047 [2024-07-12 11:07:17.015518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.047 [2024-07-12 11:07:17.015741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.047 [2024-07-12 11:07:17.015750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.047 [2024-07-12 11:07:17.015758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.047 [2024-07-12 11:07:17.019320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.309 [2024-07-12 11:07:17.028336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.309 [2024-07-12 11:07:17.028932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.309 [2024-07-12 11:07:17.028950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.310 [2024-07-12 11:07:17.028958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.310 [2024-07-12 11:07:17.029187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.310 [2024-07-12 11:07:17.029408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.310 [2024-07-12 11:07:17.029415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.310 [2024-07-12 11:07:17.029423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.310 [2024-07-12 11:07:17.032973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.310 [2024-07-12 11:07:17.042189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.310 [2024-07-12 11:07:17.042915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.310 [2024-07-12 11:07:17.042957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.310 [2024-07-12 11:07:17.042968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.310 [2024-07-12 11:07:17.043219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.310 [2024-07-12 11:07:17.043443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.310 [2024-07-12 11:07:17.043451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.310 [2024-07-12 11:07:17.043459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.310 [2024-07-12 11:07:17.047015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.310 [2024-07-12 11:07:17.056014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.310 [2024-07-12 11:07:17.056742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.310 [2024-07-12 11:07:17.056780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.310 [2024-07-12 11:07:17.056791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.310 [2024-07-12 11:07:17.057031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.310 [2024-07-12 11:07:17.057262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.310 [2024-07-12 11:07:17.057271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.310 [2024-07-12 11:07:17.057279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.310 [2024-07-12 11:07:17.060830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.310 [2024-07-12 11:07:17.069851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.310 [2024-07-12 11:07:17.070549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.310 [2024-07-12 11:07:17.070586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.310 [2024-07-12 11:07:17.070597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.310 [2024-07-12 11:07:17.070837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.310 [2024-07-12 11:07:17.071060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.310 [2024-07-12 11:07:17.071068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.310 [2024-07-12 11:07:17.071076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.310 [2024-07-12 11:07:17.074636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.310 [2024-07-12 11:07:17.083840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.310 [2024-07-12 11:07:17.084459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.310 [2024-07-12 11:07:17.084497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.310 [2024-07-12 11:07:17.084507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.310 [2024-07-12 11:07:17.084747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.310 [2024-07-12 11:07:17.084974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.310 [2024-07-12 11:07:17.084983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.310 [2024-07-12 11:07:17.084990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.310 [2024-07-12 11:07:17.088549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.310 [2024-07-12 11:07:17.097751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.310 [2024-07-12 11:07:17.098496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.310 [2024-07-12 11:07:17.098533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.310 [2024-07-12 11:07:17.098544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.310 [2024-07-12 11:07:17.098783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.310 [2024-07-12 11:07:17.099006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.310 [2024-07-12 11:07:17.099015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.310 [2024-07-12 11:07:17.099022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.310 [2024-07-12 11:07:17.102581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.310 [2024-07-12 11:07:17.111579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.310 [2024-07-12 11:07:17.112202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.310 [2024-07-12 11:07:17.112239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.310 [2024-07-12 11:07:17.112251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.310 [2024-07-12 11:07:17.112492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.310 [2024-07-12 11:07:17.112715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.310 [2024-07-12 11:07:17.112724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.310 [2024-07-12 11:07:17.112732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.310 [2024-07-12 11:07:17.116295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.310 [2024-07-12 11:07:17.125500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.310 [2024-07-12 11:07:17.126223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.310 [2024-07-12 11:07:17.126260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.310 [2024-07-12 11:07:17.126272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.310 [2024-07-12 11:07:17.126515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.310 [2024-07-12 11:07:17.126738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.310 [2024-07-12 11:07:17.126748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.310 [2024-07-12 11:07:17.126755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.310 [2024-07-12 11:07:17.130316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.310 [2024-07-12 11:07:17.139318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.310 [2024-07-12 11:07:17.139950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.310 [2024-07-12 11:07:17.139988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.310 [2024-07-12 11:07:17.139999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.310 [2024-07-12 11:07:17.140247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.311 [2024-07-12 11:07:17.140471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.311 [2024-07-12 11:07:17.140480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.311 [2024-07-12 11:07:17.140487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.311 [2024-07-12 11:07:17.144037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.311 [2024-07-12 11:07:17.153253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.311 [2024-07-12 11:07:17.153832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.311 [2024-07-12 11:07:17.153869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.311 [2024-07-12 11:07:17.153880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.311 [2024-07-12 11:07:17.154120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.311 [2024-07-12 11:07:17.154352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.311 [2024-07-12 11:07:17.154361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.311 [2024-07-12 11:07:17.154368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.311 [2024-07-12 11:07:17.157925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.311 [2024-07-12 11:07:17.167144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.311 [2024-07-12 11:07:17.167857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.311 [2024-07-12 11:07:17.167894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.311 [2024-07-12 11:07:17.167904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.311 [2024-07-12 11:07:17.168152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.311 [2024-07-12 11:07:17.168376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.311 [2024-07-12 11:07:17.168384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.311 [2024-07-12 11:07:17.168392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.311 [2024-07-12 11:07:17.171942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.311 [2024-07-12 11:07:17.180935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.311 [2024-07-12 11:07:17.181621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.311 [2024-07-12 11:07:17.181658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.311 [2024-07-12 11:07:17.181674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.311 [2024-07-12 11:07:17.181914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.311 [2024-07-12 11:07:17.182145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.311 [2024-07-12 11:07:17.182154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.311 [2024-07-12 11:07:17.182161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.311 [2024-07-12 11:07:17.185713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.311 [2024-07-12 11:07:17.194926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.311 [2024-07-12 11:07:17.195718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.311 [2024-07-12 11:07:17.195756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.311 [2024-07-12 11:07:17.195767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.311 [2024-07-12 11:07:17.196006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.311 [2024-07-12 11:07:17.196237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.311 [2024-07-12 11:07:17.196246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.311 [2024-07-12 11:07:17.196254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.311 [2024-07-12 11:07:17.199807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.311 [2024-07-12 11:07:17.208805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.311 [2024-07-12 11:07:17.209442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.311 [2024-07-12 11:07:17.209479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.311 [2024-07-12 11:07:17.209490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.311 [2024-07-12 11:07:17.209729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.311 [2024-07-12 11:07:17.209953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.311 [2024-07-12 11:07:17.209961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.311 [2024-07-12 11:07:17.209969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.311 [2024-07-12 11:07:17.213524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.311 [2024-07-12 11:07:17.222728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.311 [2024-07-12 11:07:17.223432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.311 [2024-07-12 11:07:17.223469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.311 [2024-07-12 11:07:17.223481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.311 [2024-07-12 11:07:17.223723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.311 [2024-07-12 11:07:17.223946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.311 [2024-07-12 11:07:17.223959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.311 [2024-07-12 11:07:17.223967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.311 [2024-07-12 11:07:17.227526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.311 [2024-07-12 11:07:17.236521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.311 [2024-07-12 11:07:17.237222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.311 [2024-07-12 11:07:17.237259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.311 [2024-07-12 11:07:17.237271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.311 [2024-07-12 11:07:17.237515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.311 [2024-07-12 11:07:17.237738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.311 [2024-07-12 11:07:17.237747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.311 [2024-07-12 11:07:17.237754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.311 [2024-07-12 11:07:17.241314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.311 [2024-07-12 11:07:17.250321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.311 [2024-07-12 11:07:17.251019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.311 [2024-07-12 11:07:17.251056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.311 [2024-07-12 11:07:17.251067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.311 [2024-07-12 11:07:17.251314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.312 [2024-07-12 11:07:17.251539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.312 [2024-07-12 11:07:17.251548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.312 [2024-07-12 11:07:17.251557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.312 [2024-07-12 11:07:17.255113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.312 [2024-07-12 11:07:17.264114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.312 [2024-07-12 11:07:17.264824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.312 [2024-07-12 11:07:17.264861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.312 [2024-07-12 11:07:17.264872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.312 [2024-07-12 11:07:17.265112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.312 [2024-07-12 11:07:17.265343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.312 [2024-07-12 11:07:17.265352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.312 [2024-07-12 11:07:17.265360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.312 [2024-07-12 11:07:17.268911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.312 [2024-07-12 11:07:17.278120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.312 [2024-07-12 11:07:17.278842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.312 [2024-07-12 11:07:17.278879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.312 [2024-07-12 11:07:17.278890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.312 [2024-07-12 11:07:17.279137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.312 [2024-07-12 11:07:17.279361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.312 [2024-07-12 11:07:17.279370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.312 [2024-07-12 11:07:17.279377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.312 [2024-07-12 11:07:17.282929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.574 [2024-07-12 11:07:17.291933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.574 [2024-07-12 11:07:17.292419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.574 [2024-07-12 11:07:17.292440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.574 [2024-07-12 11:07:17.292448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.574 [2024-07-12 11:07:17.292670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.574 [2024-07-12 11:07:17.292890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.574 [2024-07-12 11:07:17.292898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.574 [2024-07-12 11:07:17.292905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.574 [2024-07-12 11:07:17.296471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.574 [2024-07-12 11:07:17.305890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.574 [2024-07-12 11:07:17.306489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.574 [2024-07-12 11:07:17.306505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.574 [2024-07-12 11:07:17.306513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.574 [2024-07-12 11:07:17.306732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.574 [2024-07-12 11:07:17.306951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.574 [2024-07-12 11:07:17.306959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.574 [2024-07-12 11:07:17.306966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.574 [2024-07-12 11:07:17.310518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.574 [2024-07-12 11:07:17.319724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.574 [2024-07-12 11:07:17.320426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.574 [2024-07-12 11:07:17.320464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.574 [2024-07-12 11:07:17.320474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.574 [2024-07-12 11:07:17.320719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.574 [2024-07-12 11:07:17.320942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.574 [2024-07-12 11:07:17.320951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.574 [2024-07-12 11:07:17.320958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.574 [2024-07-12 11:07:17.324518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.574 [2024-07-12 11:07:17.333519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.574 [2024-07-12 11:07:17.334220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.574 [2024-07-12 11:07:17.334257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.574 [2024-07-12 11:07:17.334269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.574 [2024-07-12 11:07:17.334512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.574 [2024-07-12 11:07:17.334735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.574 [2024-07-12 11:07:17.334744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.574 [2024-07-12 11:07:17.334751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.574 [2024-07-12 11:07:17.338313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.574 [2024-07-12 11:07:17.347312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.574 [2024-07-12 11:07:17.348048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.574 [2024-07-12 11:07:17.348084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.574 [2024-07-12 11:07:17.348096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.574 [2024-07-12 11:07:17.348348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.574 [2024-07-12 11:07:17.348572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.574 [2024-07-12 11:07:17.348580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.574 [2024-07-12 11:07:17.348588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.574 [2024-07-12 11:07:17.352140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.574 [2024-07-12 11:07:17.361153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.574 [2024-07-12 11:07:17.361867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.574 [2024-07-12 11:07:17.361904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.574 [2024-07-12 11:07:17.361915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.574 [2024-07-12 11:07:17.362163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.574 [2024-07-12 11:07:17.362387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.574 [2024-07-12 11:07:17.362395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.574 [2024-07-12 11:07:17.362407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.574 [2024-07-12 11:07:17.365971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.574 [2024-07-12 11:07:17.374970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.574 [2024-07-12 11:07:17.375549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.574 [2024-07-12 11:07:17.375586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.574 [2024-07-12 11:07:17.375596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.574 [2024-07-12 11:07:17.375836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.574 [2024-07-12 11:07:17.376059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.574 [2024-07-12 11:07:17.376068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.574 [2024-07-12 11:07:17.376075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.574 [2024-07-12 11:07:17.379635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.574 [2024-07-12 11:07:17.388842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.574 [2024-07-12 11:07:17.389540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.574 [2024-07-12 11:07:17.389577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.574 [2024-07-12 11:07:17.389588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.574 [2024-07-12 11:07:17.389827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.574 [2024-07-12 11:07:17.390050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.574 [2024-07-12 11:07:17.390059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.574 [2024-07-12 11:07:17.390066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.574 [2024-07-12 11:07:17.393626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.574 [2024-07-12 11:07:17.402831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.574 [2024-07-12 11:07:17.403491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.574 [2024-07-12 11:07:17.403528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.574 [2024-07-12 11:07:17.403538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.574 [2024-07-12 11:07:17.403778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.574 [2024-07-12 11:07:17.404001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.574 [2024-07-12 11:07:17.404010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.574 [2024-07-12 11:07:17.404017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.574 [2024-07-12 11:07:17.407577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.574 [2024-07-12 11:07:17.416781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.574 [2024-07-12 11:07:17.417473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.574 [2024-07-12 11:07:17.417515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.574 [2024-07-12 11:07:17.417525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.574 [2024-07-12 11:07:17.417765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.574 [2024-07-12 11:07:17.417988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.575 [2024-07-12 11:07:17.417996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.575 [2024-07-12 11:07:17.418004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.575 [2024-07-12 11:07:17.421561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.575 [2024-07-12 11:07:17.430768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.575 [2024-07-12 11:07:17.431484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.575 [2024-07-12 11:07:17.431521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.575 [2024-07-12 11:07:17.431532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.575 [2024-07-12 11:07:17.431772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.575 [2024-07-12 11:07:17.431995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.575 [2024-07-12 11:07:17.432004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.575 [2024-07-12 11:07:17.432011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.575 [2024-07-12 11:07:17.435571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.575 [2024-07-12 11:07:17.444569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.575 [2024-07-12 11:07:17.445252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.575 [2024-07-12 11:07:17.445289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.575 [2024-07-12 11:07:17.445301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.575 [2024-07-12 11:07:17.445542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.575 [2024-07-12 11:07:17.445766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.575 [2024-07-12 11:07:17.445774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.575 [2024-07-12 11:07:17.445781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.575 [2024-07-12 11:07:17.449451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.575 [2024-07-12 11:07:17.458455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.575 [2024-07-12 11:07:17.459114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.575 [2024-07-12 11:07:17.459137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.575 [2024-07-12 11:07:17.459145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.575 [2024-07-12 11:07:17.459365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.575 [2024-07-12 11:07:17.459590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.575 [2024-07-12 11:07:17.459598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.575 [2024-07-12 11:07:17.459605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.575 [2024-07-12 11:07:17.463155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.575 [2024-07-12 11:07:17.472583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.575 [2024-07-12 11:07:17.473060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.575 [2024-07-12 11:07:17.473077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.575 [2024-07-12 11:07:17.473085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.575 [2024-07-12 11:07:17.473311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.575 [2024-07-12 11:07:17.473532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.575 [2024-07-12 11:07:17.473539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.575 [2024-07-12 11:07:17.473547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.575 [2024-07-12 11:07:17.477089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.575 [2024-07-12 11:07:17.486497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.575 [2024-07-12 11:07:17.487219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.575 [2024-07-12 11:07:17.487256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.575 [2024-07-12 11:07:17.487269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.575 [2024-07-12 11:07:17.487512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.575 [2024-07-12 11:07:17.487735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.575 [2024-07-12 11:07:17.487743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.575 [2024-07-12 11:07:17.487751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.575 [2024-07-12 11:07:17.491310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.575 [2024-07-12 11:07:17.500306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.575 [2024-07-12 11:07:17.501019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.575 [2024-07-12 11:07:17.501056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.575 [2024-07-12 11:07:17.501067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.575 [2024-07-12 11:07:17.501315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.575 [2024-07-12 11:07:17.501539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.575 [2024-07-12 11:07:17.501548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.575 [2024-07-12 11:07:17.501555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.575 [2024-07-12 11:07:17.505104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.575 [2024-07-12 11:07:17.514108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.575 [2024-07-12 11:07:17.514805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.575 [2024-07-12 11:07:17.514843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.575 [2024-07-12 11:07:17.514853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.575 [2024-07-12 11:07:17.515093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.575 [2024-07-12 11:07:17.515325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.575 [2024-07-12 11:07:17.515335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.575 [2024-07-12 11:07:17.515342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.575 [2024-07-12 11:07:17.518893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.575 [2024-07-12 11:07:17.528097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.575 [2024-07-12 11:07:17.528822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.575 [2024-07-12 11:07:17.528859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.575 [2024-07-12 11:07:17.528869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.575 [2024-07-12 11:07:17.529109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.575 [2024-07-12 11:07:17.529341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.575 [2024-07-12 11:07:17.529350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.575 [2024-07-12 11:07:17.529358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.575 [2024-07-12 11:07:17.532910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.575 [2024-07-12 11:07:17.541904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.575 [2024-07-12 11:07:17.542601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.575 [2024-07-12 11:07:17.542638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.575 [2024-07-12 11:07:17.542648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.575 [2024-07-12 11:07:17.542887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.575 [2024-07-12 11:07:17.543110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.575 [2024-07-12 11:07:17.543119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.575 [2024-07-12 11:07:17.543135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.575 [2024-07-12 11:07:17.546684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.575 [2024-07-12 11:07:17.555901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.837 [2024-07-12 11:07:17.556520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.837 [2024-07-12 11:07:17.556537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.837 [2024-07-12 11:07:17.556549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.837 [2024-07-12 11:07:17.556770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.837 [2024-07-12 11:07:17.556989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.837 [2024-07-12 11:07:17.556997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.837 [2024-07-12 11:07:17.557004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.837 [2024-07-12 11:07:17.560557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.837 [2024-07-12 11:07:17.569773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.837 [2024-07-12 11:07:17.570241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.837 [2024-07-12 11:07:17.570259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.837 [2024-07-12 11:07:17.570267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.837 [2024-07-12 11:07:17.570487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.837 [2024-07-12 11:07:17.570706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.570713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.570720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.574269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.583677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.584412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.584449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.584460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.584700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.838 [2024-07-12 11:07:17.584923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.584931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.584939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.588496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.597503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.598222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.598259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.598271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.598513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.838 [2024-07-12 11:07:17.598735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.598748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.598756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.602316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.611310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.612044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.612081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.612093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.612345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.838 [2024-07-12 11:07:17.612569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.612577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.612585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.616137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.625146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.625763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.625800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.625811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.626051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.838 [2024-07-12 11:07:17.626281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.626291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.626299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.629854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.639066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.639721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.639740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.639747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.639968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.838 [2024-07-12 11:07:17.640233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.640242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.640249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.643798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.653001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.653695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.653732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.653742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.653982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.838 [2024-07-12 11:07:17.654212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.654222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.654229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.657780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.666997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.667743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.667780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.667793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.668034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.838 [2024-07-12 11:07:17.668265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.668274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.668282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.671834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.680829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.681534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.681571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.681581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.681821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.838 [2024-07-12 11:07:17.682044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.682052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.682060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.685622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.694622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.695222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.695259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.695271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.695520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.838 [2024-07-12 11:07:17.695743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.695752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.695759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.699317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.708522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.709223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.709260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.709271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.709510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.838 [2024-07-12 11:07:17.709733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.838 [2024-07-12 11:07:17.709741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.838 [2024-07-12 11:07:17.709749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.838 [2024-07-12 11:07:17.713309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.838 [2024-07-12 11:07:17.722514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.838 [2024-07-12 11:07:17.722987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.838 [2024-07-12 11:07:17.723005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.838 [2024-07-12 11:07:17.723013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.838 [2024-07-12 11:07:17.723238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.839 [2024-07-12 11:07:17.723457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.839 [2024-07-12 11:07:17.723465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.839 [2024-07-12 11:07:17.723472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.839 [2024-07-12 11:07:17.727016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.839 [2024-07-12 11:07:17.736427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.839 [2024-07-12 11:07:17.737140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.839 [2024-07-12 11:07:17.737177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.839 [2024-07-12 11:07:17.737190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.839 [2024-07-12 11:07:17.737433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.839 [2024-07-12 11:07:17.737656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.839 [2024-07-12 11:07:17.737664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.839 [2024-07-12 11:07:17.737677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.839 [2024-07-12 11:07:17.741240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.839 [2024-07-12 11:07:17.750237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.839 [2024-07-12 11:07:17.750974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.839 [2024-07-12 11:07:17.751011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.839 [2024-07-12 11:07:17.751022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.839 [2024-07-12 11:07:17.751269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.839 [2024-07-12 11:07:17.751493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.839 [2024-07-12 11:07:17.751501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.839 [2024-07-12 11:07:17.751509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.839 [2024-07-12 11:07:17.755058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.839 [2024-07-12 11:07:17.764067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.839 [2024-07-12 11:07:17.764821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.839 [2024-07-12 11:07:17.764858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.839 [2024-07-12 11:07:17.764869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.839 [2024-07-12 11:07:17.765108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.839 [2024-07-12 11:07:17.765348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.839 [2024-07-12 11:07:17.765358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.839 [2024-07-12 11:07:17.765365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.839 [2024-07-12 11:07:17.768916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.839 [2024-07-12 11:07:17.777919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.839 [2024-07-12 11:07:17.778624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.839 [2024-07-12 11:07:17.778661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.839 [2024-07-12 11:07:17.778672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.839 [2024-07-12 11:07:17.778911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.839 [2024-07-12 11:07:17.779143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.839 [2024-07-12 11:07:17.779152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.839 [2024-07-12 11:07:17.779160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.839 [2024-07-12 11:07:17.782712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.839 [2024-07-12 11:07:17.791914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.839 [2024-07-12 11:07:17.792638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.839 [2024-07-12 11:07:17.792679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.839 [2024-07-12 11:07:17.792690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.839 [2024-07-12 11:07:17.792930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.839 [2024-07-12 11:07:17.793161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.839 [2024-07-12 11:07:17.793170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.839 [2024-07-12 11:07:17.793178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.839 [2024-07-12 11:07:17.796729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.839 [2024-07-12 11:07:17.805721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.839 [2024-07-12 11:07:17.806310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.839 [2024-07-12 11:07:17.806329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:00.839 [2024-07-12 11:07:17.806337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:00.839 [2024-07-12 11:07:17.806557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:00.839 [2024-07-12 11:07:17.806776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:00.839 [2024-07-12 11:07:17.806784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:00.839 [2024-07-12 11:07:17.806791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:00.839 [2024-07-12 11:07:17.810339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.839 [2024-07-12 11:07:17.819541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.101 [2024-07-12 11:07:17.820222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.101 [2024-07-12 11:07:17.820260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.101 [2024-07-12 11:07:17.820273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.101 [2024-07-12 11:07:17.820514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.101 [2024-07-12 11:07:17.820737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.101 [2024-07-12 11:07:17.820745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.101 [2024-07-12 11:07:17.820753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.101 [2024-07-12 11:07:17.824312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.101 [2024-07-12 11:07:17.833548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.101 [2024-07-12 11:07:17.834020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.101 [2024-07-12 11:07:17.834041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.101 [2024-07-12 11:07:17.834049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.101 [2024-07-12 11:07:17.834277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.101 [2024-07-12 11:07:17.834503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.101 [2024-07-12 11:07:17.834511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.101 [2024-07-12 11:07:17.834518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.101 [2024-07-12 11:07:17.838063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.101 [2024-07-12 11:07:17.847471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.101 [2024-07-12 11:07:17.848201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.101 [2024-07-12 11:07:17.848237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.101 [2024-07-12 11:07:17.848249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.101 [2024-07-12 11:07:17.848488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.101 [2024-07-12 11:07:17.848711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.101 [2024-07-12 11:07:17.848719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.101 [2024-07-12 11:07:17.848727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.101 [2024-07-12 11:07:17.852288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.101 [2024-07-12 11:07:17.861296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.101 [2024-07-12 11:07:17.862034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.101 [2024-07-12 11:07:17.862071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.101 [2024-07-12 11:07:17.862083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.101 [2024-07-12 11:07:17.862335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.101 [2024-07-12 11:07:17.862559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.101 [2024-07-12 11:07:17.862568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.101 [2024-07-12 11:07:17.862575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.101 [2024-07-12 11:07:17.866141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.101 [2024-07-12 11:07:17.875151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.101 [2024-07-12 11:07:17.875889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.102 [2024-07-12 11:07:17.875926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.102 [2024-07-12 11:07:17.875937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.102 [2024-07-12 11:07:17.876184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.102 [2024-07-12 11:07:17.876408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.102 [2024-07-12 11:07:17.876416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.102 [2024-07-12 11:07:17.876424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.102 [2024-07-12 11:07:17.879983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.102 [2024-07-12 11:07:17.888988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.102 [2024-07-12 11:07:17.889694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.102 [2024-07-12 11:07:17.889732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.102 [2024-07-12 11:07:17.889742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.102 [2024-07-12 11:07:17.889982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.102 [2024-07-12 11:07:17.890211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.102 [2024-07-12 11:07:17.890220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.102 [2024-07-12 11:07:17.890227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.102 [2024-07-12 11:07:17.893782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.102 [2024-07-12 11:07:17.902805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.102 [2024-07-12 11:07:17.903527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.102 [2024-07-12 11:07:17.903565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.102 [2024-07-12 11:07:17.903575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.102 [2024-07-12 11:07:17.903815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.102 [2024-07-12 11:07:17.904038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.102 [2024-07-12 11:07:17.904047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.102 [2024-07-12 11:07:17.904055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.102 [2024-07-12 11:07:17.907615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.102 [2024-07-12 11:07:17.916623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.102 [2024-07-12 11:07:17.917256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.102 [2024-07-12 11:07:17.917293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.102 [2024-07-12 11:07:17.917305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.102 [2024-07-12 11:07:17.917548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.102 [2024-07-12 11:07:17.917771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.102 [2024-07-12 11:07:17.917780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.102 [2024-07-12 11:07:17.917787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.102 [2024-07-12 11:07:17.921346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.102 [2024-07-12 11:07:17.930562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.102 [2024-07-12 11:07:17.931220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.102 [2024-07-12 11:07:17.931258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.102 [2024-07-12 11:07:17.931275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.102 [2024-07-12 11:07:17.931516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.102 [2024-07-12 11:07:17.931739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.102 [2024-07-12 11:07:17.931748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.102 [2024-07-12 11:07:17.931756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.102 [2024-07-12 11:07:17.935316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.102 [2024-07-12 11:07:17.944531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.102 [2024-07-12 11:07:17.945144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.102 [2024-07-12 11:07:17.945165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.102 [2024-07-12 11:07:17.945173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.102 [2024-07-12 11:07:17.945394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.102 [2024-07-12 11:07:17.945613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.102 [2024-07-12 11:07:17.945621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.102 [2024-07-12 11:07:17.945628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.102 [2024-07-12 11:07:17.949180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.102 [2024-07-12 11:07:17.958425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.102 [2024-07-12 11:07:17.958950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.102 [2024-07-12 11:07:17.958966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.102 [2024-07-12 11:07:17.958973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.102 [2024-07-12 11:07:17.959199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.102 [2024-07-12 11:07:17.959419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.102 [2024-07-12 11:07:17.959427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.102 [2024-07-12 11:07:17.959434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.102 [2024-07-12 11:07:17.963141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.102 [2024-07-12 11:07:17.972366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.102 [2024-07-12 11:07:17.973056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.102 [2024-07-12 11:07:17.973093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.102 [2024-07-12 11:07:17.973105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.102 [2024-07-12 11:07:17.973354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.102 [2024-07-12 11:07:17.973578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.102 [2024-07-12 11:07:17.973591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.102 [2024-07-12 11:07:17.973599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.102 [2024-07-12 11:07:17.977156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.102 [2024-07-12 11:07:17.986159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.102 [2024-07-12 11:07:17.986759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.102 [2024-07-12 11:07:17.986778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.102 [2024-07-12 11:07:17.986785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.102 [2024-07-12 11:07:17.987005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.102 [2024-07-12 11:07:17.987231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.102 [2024-07-12 11:07:17.987239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.102 [2024-07-12 11:07:17.987246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.102 [2024-07-12 11:07:17.990795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.102 [2024-07-12 11:07:17.998848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.102 [2024-07-12 11:07:17.999487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.102 [2024-07-12 11:07:17.999517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.102 [2024-07-12 11:07:17.999526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.102 [2024-07-12 11:07:17.999693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.102 [2024-07-12 11:07:17.999847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.103 [2024-07-12 11:07:17.999853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.103 [2024-07-12 11:07:17.999858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.103 [2024-07-12 11:07:18.002304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.103 [2024-07-12 11:07:18.011496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.103 [2024-07-12 11:07:18.012164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.103 [2024-07-12 11:07:18.012194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.103 [2024-07-12 11:07:18.012202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.103 [2024-07-12 11:07:18.012372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.103 [2024-07-12 11:07:18.012525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.103 [2024-07-12 11:07:18.012531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.103 [2024-07-12 11:07:18.012537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.103 [2024-07-12 11:07:18.014981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.103 [2024-07-12 11:07:18.024175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.103 [2024-07-12 11:07:18.024723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.103 [2024-07-12 11:07:18.024737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.103 [2024-07-12 11:07:18.024743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.103 [2024-07-12 11:07:18.024894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.103 [2024-07-12 11:07:18.025046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.103 [2024-07-12 11:07:18.025051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.103 [2024-07-12 11:07:18.025056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.103 [2024-07-12 11:07:18.027496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.103 [2024-07-12 11:07:18.036828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.103 [2024-07-12 11:07:18.037351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.103 [2024-07-12 11:07:18.037364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.103 [2024-07-12 11:07:18.037369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.103 [2024-07-12 11:07:18.037520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.103 [2024-07-12 11:07:18.037671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.103 [2024-07-12 11:07:18.037677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.103 [2024-07-12 11:07:18.037681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.103 [2024-07-12 11:07:18.040114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.103 [2024-07-12 11:07:18.049446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.103 [2024-07-12 11:07:18.049779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.103 [2024-07-12 11:07:18.049790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.103 [2024-07-12 11:07:18.049796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.103 [2024-07-12 11:07:18.049946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.103 [2024-07-12 11:07:18.050097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.103 [2024-07-12 11:07:18.050102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.103 [2024-07-12 11:07:18.050108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.103 [2024-07-12 11:07:18.052547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.103 [2024-07-12 11:07:18.062165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.103 [2024-07-12 11:07:18.062822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.103 [2024-07-12 11:07:18.062852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.103 [2024-07-12 11:07:18.062860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.103 [2024-07-12 11:07:18.063031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.103 [2024-07-12 11:07:18.063192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.103 [2024-07-12 11:07:18.063199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.103 [2024-07-12 11:07:18.063205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.103 [2024-07-12 11:07:18.065654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.103 [2024-07-12 11:07:18.074845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.103 [2024-07-12 11:07:18.075488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.103 [2024-07-12 11:07:18.075518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.103 [2024-07-12 11:07:18.075527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.103 [2024-07-12 11:07:18.075694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.103 [2024-07-12 11:07:18.075848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.103 [2024-07-12 11:07:18.075854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.103 [2024-07-12 11:07:18.075859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.103 [2024-07-12 11:07:18.078304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.366 [2024-07-12 11:07:18.087497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.366 [2024-07-12 11:07:18.088051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.366 [2024-07-12 11:07:18.088065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.366 [2024-07-12 11:07:18.088070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.366 [2024-07-12 11:07:18.088227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.366 [2024-07-12 11:07:18.088379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.366 [2024-07-12 11:07:18.088384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.366 [2024-07-12 11:07:18.088389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.366 [2024-07-12 11:07:18.090823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.366 [2024-07-12 11:07:18.100157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.366 [2024-07-12 11:07:18.100677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.366 [2024-07-12 11:07:18.100689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.366 [2024-07-12 11:07:18.100694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.366 [2024-07-12 11:07:18.100845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.366 [2024-07-12 11:07:18.100996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.366 [2024-07-12 11:07:18.101001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.366 [2024-07-12 11:07:18.101010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.366 [2024-07-12 11:07:18.103449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.366 [2024-07-12 11:07:18.112778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.366 [2024-07-12 11:07:18.113298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.366 [2024-07-12 11:07:18.113310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.366 [2024-07-12 11:07:18.113315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.366 [2024-07-12 11:07:18.113466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.366 [2024-07-12 11:07:18.113617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.366 [2024-07-12 11:07:18.113622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.366 [2024-07-12 11:07:18.113627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.366 [2024-07-12 11:07:18.116061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.366 [2024-07-12 11:07:18.125395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.366 [2024-07-12 11:07:18.125915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.366 [2024-07-12 11:07:18.125926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.366 [2024-07-12 11:07:18.125932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.366 [2024-07-12 11:07:18.126082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.366 [2024-07-12 11:07:18.126237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.366 [2024-07-12 11:07:18.126243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.366 [2024-07-12 11:07:18.126248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.366 [2024-07-12 11:07:18.128681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.366 [2024-07-12 11:07:18.138013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.366 [2024-07-12 11:07:18.138539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.366 [2024-07-12 11:07:18.138550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.366 [2024-07-12 11:07:18.138555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.366 [2024-07-12 11:07:18.138706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.366 [2024-07-12 11:07:18.138857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.366 [2024-07-12 11:07:18.138862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.366 [2024-07-12 11:07:18.138867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.366 [2024-07-12 11:07:18.141303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.366 [2024-07-12 11:07:18.150633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.366 [2024-07-12 11:07:18.151173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.366 [2024-07-12 11:07:18.151187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.366 [2024-07-12 11:07:18.151193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.366 [2024-07-12 11:07:18.151344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.366 [2024-07-12 11:07:18.151494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.366 [2024-07-12 11:07:18.151500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.366 [2024-07-12 11:07:18.151505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.366 [2024-07-12 11:07:18.153939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.366 [2024-07-12 11:07:18.163281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.366 [2024-07-12 11:07:18.163837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.366 [2024-07-12 11:07:18.163848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.366 [2024-07-12 11:07:18.163854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.164004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.164159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.164165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.164170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.166616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.175946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.176589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.176619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.367 [2024-07-12 11:07:18.176628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.176796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.176950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.176956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.176961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.179404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.188596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.189259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.189289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.367 [2024-07-12 11:07:18.189297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.189467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.189625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.189631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.189637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.192083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.201286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.201888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.201918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.367 [2024-07-12 11:07:18.201926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.202094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.202255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.202262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.202267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.204706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.213899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.214435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.214450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.367 [2024-07-12 11:07:18.214455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.214607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.214758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.214763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.214768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.217281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.226614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.227165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.227178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.367 [2024-07-12 11:07:18.227183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.227334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.227485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.227490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.227496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.229937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.239267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.239819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.239831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.367 [2024-07-12 11:07:18.239837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.239987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.240143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.240150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.240154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.242588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.251912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.252539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.252569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.367 [2024-07-12 11:07:18.252578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.252745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.252899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.252905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.252911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.255355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.264546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.264994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.265008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.367 [2024-07-12 11:07:18.265014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.265178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.265330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.265335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.265341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.267778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.277264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.277822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.277834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.367 [2024-07-12 11:07:18.277843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.277994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.278150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.278156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.278161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.280596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.289930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.290455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.290467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.367 [2024-07-12 11:07:18.290472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.367 [2024-07-12 11:07:18.290623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.367 [2024-07-12 11:07:18.290774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.367 [2024-07-12 11:07:18.290780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.367 [2024-07-12 11:07:18.290785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.367 [2024-07-12 11:07:18.293222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.367 [2024-07-12 11:07:18.302560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.367 [2024-07-12 11:07:18.302982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.367 [2024-07-12 11:07:18.302993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.368 [2024-07-12 11:07:18.302998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.368 [2024-07-12 11:07:18.303153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.368 [2024-07-12 11:07:18.303304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.368 [2024-07-12 11:07:18.303310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.368 [2024-07-12 11:07:18.303315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.368 [2024-07-12 11:07:18.305750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.368 [2024-07-12 11:07:18.315231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.368 [2024-07-12 11:07:18.315764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.368 [2024-07-12 11:07:18.315775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.368 [2024-07-12 11:07:18.315781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.368 [2024-07-12 11:07:18.315931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.368 [2024-07-12 11:07:18.316082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.368 [2024-07-12 11:07:18.316090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.368 [2024-07-12 11:07:18.316095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.368 [2024-07-12 11:07:18.318534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.368 [2024-07-12 11:07:18.327871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.368 [2024-07-12 11:07:18.328512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.368 [2024-07-12 11:07:18.328543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.368 [2024-07-12 11:07:18.328551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.368 [2024-07-12 11:07:18.328719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.368 [2024-07-12 11:07:18.328874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.368 [2024-07-12 11:07:18.328880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.368 [2024-07-12 11:07:18.328886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.368 [2024-07-12 11:07:18.331330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.368 [2024-07-12 11:07:18.340516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.368 [2024-07-12 11:07:18.341069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.368 [2024-07-12 11:07:18.341084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.368 [2024-07-12 11:07:18.341089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.368 [2024-07-12 11:07:18.341244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.368 [2024-07-12 11:07:18.341396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.368 [2024-07-12 11:07:18.341402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.368 [2024-07-12 11:07:18.341407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.368 [2024-07-12 11:07:18.343844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.630 [2024-07-12 11:07:18.353190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.630 [2024-07-12 11:07:18.353740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.630 [2024-07-12 11:07:18.353752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.630 [2024-07-12 11:07:18.353758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.630 [2024-07-12 11:07:18.353908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.630 [2024-07-12 11:07:18.354059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.630 [2024-07-12 11:07:18.354065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.630 [2024-07-12 11:07:18.354070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.630 [2024-07-12 11:07:18.356519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.630 [2024-07-12 11:07:18.365874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.630 [2024-07-12 11:07:18.366408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.630 [2024-07-12 11:07:18.366437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.630 [2024-07-12 11:07:18.366446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.630 [2024-07-12 11:07:18.366613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.630 [2024-07-12 11:07:18.366768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.630 [2024-07-12 11:07:18.366774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.630 [2024-07-12 11:07:18.366780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.630 [2024-07-12 11:07:18.369226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.630 [2024-07-12 11:07:18.378559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.630 [2024-07-12 11:07:18.379178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.630 [2024-07-12 11:07:18.379207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.630 [2024-07-12 11:07:18.379216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.630 [2024-07-12 11:07:18.379386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.630 [2024-07-12 11:07:18.379540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.630 [2024-07-12 11:07:18.379546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.630 [2024-07-12 11:07:18.379551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.630 [2024-07-12 11:07:18.381994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.630 [2024-07-12 11:07:18.391191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.630 [2024-07-12 11:07:18.391865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.630 [2024-07-12 11:07:18.391895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.630 [2024-07-12 11:07:18.391903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.630 [2024-07-12 11:07:18.392071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.630 [2024-07-12 11:07:18.392232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.630 [2024-07-12 11:07:18.392239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.630 [2024-07-12 11:07:18.392244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.630 [2024-07-12 11:07:18.394683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.403878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.404502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.404532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.404540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.404711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.404865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.404871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.404876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.407320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.416509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.417145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.417174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.417182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.417349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.417503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.417509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.417514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.419953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.429141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.429682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.429696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.429701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.429852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.430004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.430009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.430014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.432452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.441780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.442285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.442298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.442303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.442454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.442605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.442610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.442619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.445054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.454528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.455082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.455093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.455098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.455253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.455405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.455411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.455416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.457848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.467202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.467693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.467704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.467709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.467860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.468011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.468017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.468022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.470460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.479912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.480461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.480474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.480479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.480630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.480781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.480787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.480792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.483228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.492567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.493079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.493094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.493099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.493254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.493405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.493410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.493415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.495856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.505200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.505825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.505854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.505863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.506030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.506191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.506198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.506203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.508642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.517840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.518390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.518405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.518411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.518562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.518713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.518719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.518724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.521164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.631 [2024-07-12 11:07:18.530503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.631 [2024-07-12 11:07:18.531140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.631 [2024-07-12 11:07:18.531169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.631 [2024-07-12 11:07:18.531178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.631 [2024-07-12 11:07:18.531346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.631 [2024-07-12 11:07:18.531503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.631 [2024-07-12 11:07:18.531509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.631 [2024-07-12 11:07:18.531514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.631 [2024-07-12 11:07:18.533959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.632 [2024-07-12 11:07:18.543201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.632 [2024-07-12 11:07:18.543781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.632 [2024-07-12 11:07:18.543795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.632 [2024-07-12 11:07:18.543801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.632 [2024-07-12 11:07:18.543952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.632 [2024-07-12 11:07:18.544103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.632 [2024-07-12 11:07:18.544109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.632 [2024-07-12 11:07:18.544114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.632 [2024-07-12 11:07:18.546556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.632 [2024-07-12 11:07:18.555894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.632 [2024-07-12 11:07:18.556421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.632 [2024-07-12 11:07:18.556434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.632 [2024-07-12 11:07:18.556440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.632 [2024-07-12 11:07:18.556591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.632 [2024-07-12 11:07:18.556742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.632 [2024-07-12 11:07:18.556747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.632 [2024-07-12 11:07:18.556752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.632 [2024-07-12 11:07:18.559192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.632 [2024-07-12 11:07:18.568539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.632 [2024-07-12 11:07:18.568931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.632 [2024-07-12 11:07:18.568943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.632 [2024-07-12 11:07:18.568948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.632 [2024-07-12 11:07:18.569099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.632 [2024-07-12 11:07:18.569254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.632 [2024-07-12 11:07:18.569260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.632 [2024-07-12 11:07:18.569265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.632 [2024-07-12 11:07:18.571705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.632 [2024-07-12 11:07:18.581193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.632 [2024-07-12 11:07:18.581798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.632 [2024-07-12 11:07:18.581827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.632 [2024-07-12 11:07:18.581835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.632 [2024-07-12 11:07:18.582002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.632 [2024-07-12 11:07:18.582164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.632 [2024-07-12 11:07:18.582171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.632 [2024-07-12 11:07:18.582176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.632 [2024-07-12 11:07:18.584616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.632 [2024-07-12 11:07:18.593815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.632 [2024-07-12 11:07:18.594456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.632 [2024-07-12 11:07:18.594486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.632 [2024-07-12 11:07:18.594494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.632 [2024-07-12 11:07:18.594661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.632 [2024-07-12 11:07:18.594815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.632 [2024-07-12 11:07:18.594821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.632 [2024-07-12 11:07:18.594826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.632 [2024-07-12 11:07:18.597272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.632 [2024-07-12 11:07:18.606459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.632 [2024-07-12 11:07:18.607089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.632 [2024-07-12 11:07:18.607119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.632 [2024-07-12 11:07:18.607133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.632 [2024-07-12 11:07:18.607303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.632 [2024-07-12 11:07:18.607457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.632 [2024-07-12 11:07:18.607463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.632 [2024-07-12 11:07:18.607469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.632 [2024-07-12 11:07:18.609905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.894 [2024-07-12 11:07:18.619093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.894 [2024-07-12 11:07:18.619635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.894 [2024-07-12 11:07:18.619649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.894 [2024-07-12 11:07:18.619659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.894 [2024-07-12 11:07:18.619811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.894 [2024-07-12 11:07:18.619961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.894 [2024-07-12 11:07:18.619967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.894 [2024-07-12 11:07:18.619972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.894 [2024-07-12 11:07:18.622411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.894 [2024-07-12 11:07:18.631741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.894 [2024-07-12 11:07:18.632287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.894 [2024-07-12 11:07:18.632300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.894 [2024-07-12 11:07:18.632305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.894 [2024-07-12 11:07:18.632456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.894 [2024-07-12 11:07:18.632607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.894 [2024-07-12 11:07:18.632612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.894 [2024-07-12 11:07:18.632617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.894 [2024-07-12 11:07:18.635050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.894 [2024-07-12 11:07:18.644389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.894 [2024-07-12 11:07:18.644926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.894 [2024-07-12 11:07:18.644937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.894 [2024-07-12 11:07:18.644942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.894 [2024-07-12 11:07:18.645093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.894 [2024-07-12 11:07:18.645249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.894 [2024-07-12 11:07:18.645255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.894 [2024-07-12 11:07:18.645260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.894 [2024-07-12 11:07:18.647693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.894 [2024-07-12 11:07:18.657028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.894 [2024-07-12 11:07:18.657615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.894 [2024-07-12 11:07:18.657627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.894 [2024-07-12 11:07:18.657632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.894 [2024-07-12 11:07:18.657783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.894 [2024-07-12 11:07:18.657933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.895 [2024-07-12 11:07:18.657942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.895 [2024-07-12 11:07:18.657946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.895 [2024-07-12 11:07:18.660382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.895 [2024-07-12 11:07:18.669718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.895 [2024-07-12 11:07:18.670219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.895 [2024-07-12 11:07:18.670231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.895 [2024-07-12 11:07:18.670237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.895 [2024-07-12 11:07:18.670387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.895 [2024-07-12 11:07:18.670538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.895 [2024-07-12 11:07:18.670543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.895 [2024-07-12 11:07:18.670548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.895 [2024-07-12 11:07:18.672980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.895 [2024-07-12 11:07:18.682454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.895 [2024-07-12 11:07:18.682846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.895 [2024-07-12 11:07:18.682861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.895 [2024-07-12 11:07:18.682866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.895 [2024-07-12 11:07:18.683019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.895 [2024-07-12 11:07:18.683180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.895 [2024-07-12 11:07:18.683187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.895 [2024-07-12 11:07:18.683192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.895 [2024-07-12 11:07:18.685627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.895 [2024-07-12 11:07:18.695089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.895 [2024-07-12 11:07:18.695564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.895 [2024-07-12 11:07:18.695592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.895 [2024-07-12 11:07:18.695601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.895 [2024-07-12 11:07:18.695768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.895 [2024-07-12 11:07:18.695922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.895 [2024-07-12 11:07:18.695928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.895 [2024-07-12 11:07:18.695933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.895 [2024-07-12 11:07:18.698382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.895 [2024-07-12 11:07:18.707718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.895 [2024-07-12 11:07:18.708265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.895 [2024-07-12 11:07:18.708278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.895 [2024-07-12 11:07:18.708284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.895 [2024-07-12 11:07:18.708435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.895 [2024-07-12 11:07:18.708586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.895 [2024-07-12 11:07:18.708592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.895 [2024-07-12 11:07:18.708596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.895 [2024-07-12 11:07:18.711029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.895 [2024-07-12 11:07:18.720353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.895 [2024-07-12 11:07:18.720865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.895 [2024-07-12 11:07:18.720876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.895 [2024-07-12 11:07:18.720881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.895 [2024-07-12 11:07:18.721032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.895 [2024-07-12 11:07:18.721188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.895 [2024-07-12 11:07:18.721194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.895 [2024-07-12 11:07:18.721199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.895 [2024-07-12 11:07:18.723630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.895 [2024-07-12 11:07:18.733091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.895 [2024-07-12 11:07:18.733710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.895 [2024-07-12 11:07:18.733739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.895 [2024-07-12 11:07:18.733748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.895 [2024-07-12 11:07:18.733915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.895 [2024-07-12 11:07:18.734069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.895 [2024-07-12 11:07:18.734075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.895 [2024-07-12 11:07:18.734080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.895 [2024-07-12 11:07:18.736526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.895 [2024-07-12 11:07:18.745709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.895 [2024-07-12 11:07:18.746334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.895 [2024-07-12 11:07:18.746363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.895 [2024-07-12 11:07:18.746372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.895 [2024-07-12 11:07:18.746542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.895 [2024-07-12 11:07:18.746696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.895 [2024-07-12 11:07:18.746702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.895 [2024-07-12 11:07:18.746708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.895 [2024-07-12 11:07:18.749153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.895 [2024-07-12 11:07:18.758342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:01.895 [2024-07-12 11:07:18.758985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.895 [2024-07-12 11:07:18.759014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:01.895 [2024-07-12 11:07:18.759022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:01.895 [2024-07-12 11:07:18.759197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:01.895 [2024-07-12 11:07:18.759352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:01.895 [2024-07-12 11:07:18.759357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:01.895 [2024-07-12 11:07:18.759363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:01.895 [2024-07-12 11:07:18.761801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:01.895 [2024-07-12 11:07:18.770986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.895 [2024-07-12 11:07:18.771587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.895 [2024-07-12 11:07:18.771617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:01.895 [2024-07-12 11:07:18.771626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:01.895 [2024-07-12 11:07:18.771793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:01.895 [2024-07-12 11:07:18.771946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.895 [2024-07-12 11:07:18.771952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.895 [2024-07-12 11:07:18.771957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.895 [2024-07-12 11:07:18.774401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.895 [2024-07-12 11:07:18.783723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.895 [2024-07-12 11:07:18.784368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.895 [2024-07-12 11:07:18.784398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:01.895 [2024-07-12 11:07:18.784406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:01.895 [2024-07-12 11:07:18.784573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:01.895 [2024-07-12 11:07:18.784727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.895 [2024-07-12 11:07:18.784733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.895 [2024-07-12 11:07:18.784742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.895 [2024-07-12 11:07:18.787189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.895 [2024-07-12 11:07:18.796376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.895 [2024-07-12 11:07:18.796829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.895 [2024-07-12 11:07:18.796857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:01.896 [2024-07-12 11:07:18.796866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:01.896 [2024-07-12 11:07:18.797036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:01.896 [2024-07-12 11:07:18.797197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.896 [2024-07-12 11:07:18.797203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.896 [2024-07-12 11:07:18.797209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.896 [2024-07-12 11:07:18.799647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.896 [2024-07-12 11:07:18.809114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.896 [2024-07-12 11:07:18.809728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.896 [2024-07-12 11:07:18.809757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:01.896 [2024-07-12 11:07:18.809765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:01.896 [2024-07-12 11:07:18.809932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:01.896 [2024-07-12 11:07:18.810085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.896 [2024-07-12 11:07:18.810091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.896 [2024-07-12 11:07:18.810097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.896 [2024-07-12 11:07:18.812541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.896 [2024-07-12 11:07:18.821861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.896 [2024-07-12 11:07:18.822476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.896 [2024-07-12 11:07:18.822506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:01.896 [2024-07-12 11:07:18.822514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:01.896 [2024-07-12 11:07:18.822681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:01.896 [2024-07-12 11:07:18.822835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.896 [2024-07-12 11:07:18.822841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.896 [2024-07-12 11:07:18.822846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.896 [2024-07-12 11:07:18.825288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.896 [2024-07-12 11:07:18.834610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.896 [2024-07-12 11:07:18.835232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.896 [2024-07-12 11:07:18.835265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:01.896 [2024-07-12 11:07:18.835273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:01.896 [2024-07-12 11:07:18.835440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:01.896 [2024-07-12 11:07:18.835593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.896 [2024-07-12 11:07:18.835599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.896 [2024-07-12 11:07:18.835605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.896 [2024-07-12 11:07:18.838050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.896 [2024-07-12 11:07:18.847233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.896 [2024-07-12 11:07:18.847856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.896 [2024-07-12 11:07:18.847886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:01.896 [2024-07-12 11:07:18.847895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:01.896 [2024-07-12 11:07:18.848062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:01.896 [2024-07-12 11:07:18.848223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.896 [2024-07-12 11:07:18.848230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.896 [2024-07-12 11:07:18.848235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.896 [2024-07-12 11:07:18.850672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.896 [2024-07-12 11:07:18.859853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.896 [2024-07-12 11:07:18.860490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.896 [2024-07-12 11:07:18.860519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:01.896 [2024-07-12 11:07:18.860527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:01.896 [2024-07-12 11:07:18.860695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:01.896 [2024-07-12 11:07:18.860848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.896 [2024-07-12 11:07:18.860854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.896 [2024-07-12 11:07:18.860860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.896 [2024-07-12 11:07:18.863304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.896 [2024-07-12 11:07:18.872500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.896 [2024-07-12 11:07:18.873091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.896 [2024-07-12 11:07:18.873120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:01.896 [2024-07-12 11:07:18.873136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:01.896 [2024-07-12 11:07:18.873303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:01.896 [2024-07-12 11:07:18.873461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.896 [2024-07-12 11:07:18.873467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.896 [2024-07-12 11:07:18.873473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.896 [2024-07-12 11:07:18.875912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.158 [2024-07-12 11:07:18.885246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.158 [2024-07-12 11:07:18.885867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.158 [2024-07-12 11:07:18.885896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.158 [2024-07-12 11:07:18.885905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.158 [2024-07-12 11:07:18.886072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.158 [2024-07-12 11:07:18.886233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.158 [2024-07-12 11:07:18.886240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.158 [2024-07-12 11:07:18.886245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.158 [2024-07-12 11:07:18.888686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.158 [2024-07-12 11:07:18.897866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.158 [2024-07-12 11:07:18.898455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.158 [2024-07-12 11:07:18.898470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.158 [2024-07-12 11:07:18.898475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.158 [2024-07-12 11:07:18.898626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.158 [2024-07-12 11:07:18.898777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.158 [2024-07-12 11:07:18.898783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.158 [2024-07-12 11:07:18.898787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.158 [2024-07-12 11:07:18.901221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.158 [2024-07-12 11:07:18.910540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.158 [2024-07-12 11:07:18.911165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.158 [2024-07-12 11:07:18.911202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.158 [2024-07-12 11:07:18.911210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.158 [2024-07-12 11:07:18.911377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.158 [2024-07-12 11:07:18.911530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.158 [2024-07-12 11:07:18.911536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.158 [2024-07-12 11:07:18.911541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.158 [2024-07-12 11:07:18.913988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.158 [2024-07-12 11:07:18.923175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.158 [2024-07-12 11:07:18.923798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.158 [2024-07-12 11:07:18.923827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.158 [2024-07-12 11:07:18.923835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:18.924003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:18.924163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:18.924170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:18.924175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:18.926612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.159 [2024-07-12 11:07:18.935802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:18.936437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:18.936466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.159 [2024-07-12 11:07:18.936475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:18.936644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:18.936797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:18.936803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:18.936809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:18.939254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.159 [2024-07-12 11:07:18.948435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:18.949079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:18.949108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.159 [2024-07-12 11:07:18.949117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:18.949293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:18.949447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:18.949454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:18.949459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:18.951898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.159 [2024-07-12 11:07:18.961097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:18.961744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:18.961774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.159 [2024-07-12 11:07:18.961786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:18.961953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:18.962106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:18.962112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:18.962118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:18.964562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.159 [2024-07-12 11:07:18.973754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:18.974425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:18.974454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.159 [2024-07-12 11:07:18.974463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:18.974630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:18.974783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:18.974789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:18.974795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:18.977241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.159 [2024-07-12 11:07:18.986419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:18.987046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:18.987075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.159 [2024-07-12 11:07:18.987084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:18.987259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:18.987413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:18.987419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:18.987424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:18.989860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.159 [2024-07-12 11:07:18.999151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:18.999841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:18.999870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.159 [2024-07-12 11:07:18.999879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:19.000046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:19.000208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:19.000221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:19.000227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:19.002665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.159 [2024-07-12 11:07:19.011844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:19.012490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:19.012519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.159 [2024-07-12 11:07:19.012528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:19.012695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:19.012849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:19.012855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:19.012860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:19.015303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.159 [2024-07-12 11:07:19.024502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:19.025130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:19.025159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.159 [2024-07-12 11:07:19.025168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:19.025337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:19.025491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:19.025497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:19.025502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:19.027942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.159 [2024-07-12 11:07:19.037129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:19.037743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:19.037772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.159 [2024-07-12 11:07:19.037781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:19.037948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:19.038101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:19.038107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:19.038113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:19.040559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.159 [2024-07-12 11:07:19.049743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:19.050289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:19.050303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.159 [2024-07-12 11:07:19.050309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.159 [2024-07-12 11:07:19.050460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.159 [2024-07-12 11:07:19.050611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.159 [2024-07-12 11:07:19.050617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.159 [2024-07-12 11:07:19.050622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.159 [2024-07-12 11:07:19.053054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.159 [2024-07-12 11:07:19.062376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.159 [2024-07-12 11:07:19.063003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.159 [2024-07-12 11:07:19.063032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.160 [2024-07-12 11:07:19.063041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.160 [2024-07-12 11:07:19.063216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.160 [2024-07-12 11:07:19.063370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.160 [2024-07-12 11:07:19.063376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.160 [2024-07-12 11:07:19.063381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.160 [2024-07-12 11:07:19.065827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.160 [2024-07-12 11:07:19.075008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.160 [2024-07-12 11:07:19.075578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.160 [2024-07-12 11:07:19.075592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.160 [2024-07-12 11:07:19.075598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.160 [2024-07-12 11:07:19.075749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.160 [2024-07-12 11:07:19.075901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.160 [2024-07-12 11:07:19.075906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.160 [2024-07-12 11:07:19.075911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.160 [2024-07-12 11:07:19.078346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.160 [2024-07-12 11:07:19.087664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.160 [2024-07-12 11:07:19.088267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.160 [2024-07-12 11:07:19.088297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.160 [2024-07-12 11:07:19.088306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.160 [2024-07-12 11:07:19.088477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.160 [2024-07-12 11:07:19.088631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.160 [2024-07-12 11:07:19.088637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.160 [2024-07-12 11:07:19.088642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.160 [2024-07-12 11:07:19.091085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.160 [2024-07-12 11:07:19.100273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.160 [2024-07-12 11:07:19.100683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.160 [2024-07-12 11:07:19.100697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.160 [2024-07-12 11:07:19.100703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.160 [2024-07-12 11:07:19.100854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.160 [2024-07-12 11:07:19.101005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.160 [2024-07-12 11:07:19.101011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.160 [2024-07-12 11:07:19.101016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.160 [2024-07-12 11:07:19.103452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.160 [2024-07-12 11:07:19.112939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.160 [2024-07-12 11:07:19.113545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.160 [2024-07-12 11:07:19.113574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.160 [2024-07-12 11:07:19.113583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.160 [2024-07-12 11:07:19.113751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.160 [2024-07-12 11:07:19.113904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.160 [2024-07-12 11:07:19.113910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.160 [2024-07-12 11:07:19.113915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.160 [2024-07-12 11:07:19.116361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.160 [2024-07-12 11:07:19.125688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.160 [2024-07-12 11:07:19.126318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.160 [2024-07-12 11:07:19.126348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.160 [2024-07-12 11:07:19.126356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.160 [2024-07-12 11:07:19.126524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.160 [2024-07-12 11:07:19.126677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.160 [2024-07-12 11:07:19.126683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.160 [2024-07-12 11:07:19.126692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.160 [2024-07-12 11:07:19.129139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.160 [2024-07-12 11:07:19.138326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.160 [2024-07-12 11:07:19.138949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.160 [2024-07-12 11:07:19.138978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.160 [2024-07-12 11:07:19.138987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.160 [2024-07-12 11:07:19.139162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.160 [2024-07-12 11:07:19.139317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.160 [2024-07-12 11:07:19.139323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.160 [2024-07-12 11:07:19.139328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.422 [2024-07-12 11:07:19.141768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.422 [2024-07-12 11:07:19.150955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.422 [2024-07-12 11:07:19.151595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.422 [2024-07-12 11:07:19.151625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.422 [2024-07-12 11:07:19.151633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.422 [2024-07-12 11:07:19.151801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.422 [2024-07-12 11:07:19.151954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.422 [2024-07-12 11:07:19.151960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.422 [2024-07-12 11:07:19.151966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.422 [2024-07-12 11:07:19.154414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.422 [2024-07-12 11:07:19.163601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.422 [2024-07-12 11:07:19.164226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.422 [2024-07-12 11:07:19.164256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.422 [2024-07-12 11:07:19.164265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.422 [2024-07-12 11:07:19.164432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.422 [2024-07-12 11:07:19.164586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.422 [2024-07-12 11:07:19.164592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.422 [2024-07-12 11:07:19.164597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.422 [2024-07-12 11:07:19.167052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.422 [2024-07-12 11:07:19.176239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.422 [2024-07-12 11:07:19.176861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.422 [2024-07-12 11:07:19.176894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.423 [2024-07-12 11:07:19.176902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.423 [2024-07-12 11:07:19.177069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.423 [2024-07-12 11:07:19.177230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.423 [2024-07-12 11:07:19.177236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.423 [2024-07-12 11:07:19.177242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.423 [2024-07-12 11:07:19.179679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.423 [2024-07-12 11:07:19.188862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.423 [2024-07-12 11:07:19.189452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.423 [2024-07-12 11:07:19.189482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.423 [2024-07-12 11:07:19.189490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.423 [2024-07-12 11:07:19.189657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.423 [2024-07-12 11:07:19.189810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.423 [2024-07-12 11:07:19.189816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.423 [2024-07-12 11:07:19.189822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.423 [2024-07-12 11:07:19.192267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.423 [2024-07-12 11:07:19.201588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.423 [2024-07-12 11:07:19.202227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.423 [2024-07-12 11:07:19.202256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.423 [2024-07-12 11:07:19.202264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.423 [2024-07-12 11:07:19.202432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.423 [2024-07-12 11:07:19.202585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.423 [2024-07-12 11:07:19.202591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.423 [2024-07-12 11:07:19.202597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.423 [2024-07-12 11:07:19.205041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.423 [2024-07-12 11:07:19.214236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.423 [2024-07-12 11:07:19.214770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.423 [2024-07-12 11:07:19.214784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.423 [2024-07-12 11:07:19.214789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.423 [2024-07-12 11:07:19.214940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.423 [2024-07-12 11:07:19.215095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.423 [2024-07-12 11:07:19.215101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.423 [2024-07-12 11:07:19.215106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.423 [2024-07-12 11:07:19.217546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.423 [2024-07-12 11:07:19.226866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.423 [2024-07-12 11:07:19.227536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.423 [2024-07-12 11:07:19.227566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.423 [2024-07-12 11:07:19.227574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.423 [2024-07-12 11:07:19.227741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.423 [2024-07-12 11:07:19.227895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.423 [2024-07-12 11:07:19.227901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.423 [2024-07-12 11:07:19.227906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.423 [2024-07-12 11:07:19.230348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.423 [2024-07-12 11:07:19.239533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.423 [2024-07-12 11:07:19.240168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.423 [2024-07-12 11:07:19.240197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.423 [2024-07-12 11:07:19.240206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.423 [2024-07-12 11:07:19.240375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.423 [2024-07-12 11:07:19.240529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.423 [2024-07-12 11:07:19.240535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.423 [2024-07-12 11:07:19.240541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.423 [2024-07-12 11:07:19.242985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.423 [2024-07-12 11:07:19.252170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.423 [2024-07-12 11:07:19.252707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.423 [2024-07-12 11:07:19.252736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.423 [2024-07-12 11:07:19.252745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.423 [2024-07-12 11:07:19.252912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.423 [2024-07-12 11:07:19.253066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.423 [2024-07-12 11:07:19.253072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.423 [2024-07-12 11:07:19.253077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.423 [2024-07-12 11:07:19.255524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.423 [2024-07-12 11:07:19.264849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.423 [2024-07-12 11:07:19.265503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.423 [2024-07-12 11:07:19.265533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.423 [2024-07-12 11:07:19.265541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.423 [2024-07-12 11:07:19.265708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.423 [2024-07-12 11:07:19.265862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.423 [2024-07-12 11:07:19.265868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.423 [2024-07-12 11:07:19.265873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.423 [2024-07-12 11:07:19.268324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.423 [2024-07-12 11:07:19.277505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.423 [2024-07-12 11:07:19.278139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.423 [2024-07-12 11:07:19.278168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.423 [2024-07-12 11:07:19.278177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.423 [2024-07-12 11:07:19.278344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.423 [2024-07-12 11:07:19.278498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.423 [2024-07-12 11:07:19.278504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.423 [2024-07-12 11:07:19.278509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.423 [2024-07-12 11:07:19.280951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.423 [2024-07-12 11:07:19.290133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.423 [2024-07-12 11:07:19.290729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.423 [2024-07-12 11:07:19.290759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.423 [2024-07-12 11:07:19.290768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.423 [2024-07-12 11:07:19.290934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.424 [2024-07-12 11:07:19.291088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.424 [2024-07-12 11:07:19.291094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.424 [2024-07-12 11:07:19.291099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.424 [2024-07-12 11:07:19.293544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.424 [2024-07-12 11:07:19.302872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.424 [2024-07-12 11:07:19.303510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.424 [2024-07-12 11:07:19.303539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.424 [2024-07-12 11:07:19.303551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.424 [2024-07-12 11:07:19.303718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.424 [2024-07-12 11:07:19.303871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.424 [2024-07-12 11:07:19.303877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.424 [2024-07-12 11:07:19.303883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.424 [2024-07-12 11:07:19.306325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.424 [2024-07-12 11:07:19.315509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.424 [2024-07-12 11:07:19.316045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.424 [2024-07-12 11:07:19.316059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.424 [2024-07-12 11:07:19.316064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.424 [2024-07-12 11:07:19.316220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.424 [2024-07-12 11:07:19.316372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.424 [2024-07-12 11:07:19.316378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.424 [2024-07-12 11:07:19.316383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.424 [2024-07-12 11:07:19.318815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.424 [2024-07-12 11:07:19.328144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.424 [2024-07-12 11:07:19.328750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.424 [2024-07-12 11:07:19.328780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.424 [2024-07-12 11:07:19.328789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.424 [2024-07-12 11:07:19.328958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.424 [2024-07-12 11:07:19.329112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.424 [2024-07-12 11:07:19.329118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.424 [2024-07-12 11:07:19.329130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.424 [2024-07-12 11:07:19.331571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.424 [2024-07-12 11:07:19.340895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.424 [2024-07-12 11:07:19.341497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.424 [2024-07-12 11:07:19.341527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.424 [2024-07-12 11:07:19.341535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.424 [2024-07-12 11:07:19.341703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.424 [2024-07-12 11:07:19.341857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.424 [2024-07-12 11:07:19.341866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.424 [2024-07-12 11:07:19.341871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.424 [2024-07-12 11:07:19.344314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.424 [2024-07-12 11:07:19.353648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.424 [2024-07-12 11:07:19.354225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.424 [2024-07-12 11:07:19.354255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.424 [2024-07-12 11:07:19.354264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.424 [2024-07-12 11:07:19.354434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.424 [2024-07-12 11:07:19.354587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.424 [2024-07-12 11:07:19.354593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.424 [2024-07-12 11:07:19.354598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.424 [2024-07-12 11:07:19.357048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.424 [2024-07-12 11:07:19.366383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.424 [2024-07-12 11:07:19.366969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.424 [2024-07-12 11:07:19.366998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.424 [2024-07-12 11:07:19.367007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.424 [2024-07-12 11:07:19.367188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.424 [2024-07-12 11:07:19.367343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.424 [2024-07-12 11:07:19.367349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.424 [2024-07-12 11:07:19.367354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.424 [2024-07-12 11:07:19.369792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.424 [2024-07-12 11:07:19.379118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.424 [2024-07-12 11:07:19.379741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.424 [2024-07-12 11:07:19.379770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.424 [2024-07-12 11:07:19.379779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.424 [2024-07-12 11:07:19.379946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.424 [2024-07-12 11:07:19.380100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.424 [2024-07-12 11:07:19.380106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.424 [2024-07-12 11:07:19.380111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.424 [2024-07-12 11:07:19.382555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.424 [2024-07-12 11:07:19.391743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.424 [2024-07-12 11:07:19.392409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.424 [2024-07-12 11:07:19.392438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.424 [2024-07-12 11:07:19.392447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.424 [2024-07-12 11:07:19.392614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.424 [2024-07-12 11:07:19.392768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.424 [2024-07-12 11:07:19.392774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.424 [2024-07-12 11:07:19.392779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.424 [2024-07-12 11:07:19.395230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.424 [2024-07-12 11:07:19.404421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.687 [2024-07-12 11:07:19.405043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.687 [2024-07-12 11:07:19.405073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.687 [2024-07-12 11:07:19.405082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.687 [2024-07-12 11:07:19.405259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.687 [2024-07-12 11:07:19.405413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.687 [2024-07-12 11:07:19.405420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.687 [2024-07-12 11:07:19.405426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.687 [2024-07-12 11:07:19.407864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.687 [2024-07-12 11:07:19.417048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.687 [2024-07-12 11:07:19.417653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.687 [2024-07-12 11:07:19.417682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.687 [2024-07-12 11:07:19.417691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.687 [2024-07-12 11:07:19.417858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.687 [2024-07-12 11:07:19.418012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.687 [2024-07-12 11:07:19.418018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.687 [2024-07-12 11:07:19.418023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.687 [2024-07-12 11:07:19.420469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.687 [2024-07-12 11:07:19.429794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.687 [2024-07-12 11:07:19.430326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.687 [2024-07-12 11:07:19.430341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.687 [2024-07-12 11:07:19.430346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.687 [2024-07-12 11:07:19.430501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.687 [2024-07-12 11:07:19.430652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.687 [2024-07-12 11:07:19.430658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.687 [2024-07-12 11:07:19.430662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.687 [2024-07-12 11:07:19.433155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.687 [2024-07-12 11:07:19.442487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.687 [2024-07-12 11:07:19.443117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.687 [2024-07-12 11:07:19.443152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.687 [2024-07-12 11:07:19.443160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.687 [2024-07-12 11:07:19.443327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.687 [2024-07-12 11:07:19.443480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.687 [2024-07-12 11:07:19.443486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.687 [2024-07-12 11:07:19.443492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.687 [2024-07-12 11:07:19.445934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.687 [2024-07-12 11:07:19.455117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.687 [2024-07-12 11:07:19.455724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.687 [2024-07-12 11:07:19.455754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.687 [2024-07-12 11:07:19.455762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.687 [2024-07-12 11:07:19.455929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.687 [2024-07-12 11:07:19.456083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.687 [2024-07-12 11:07:19.456089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.687 [2024-07-12 11:07:19.456094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.687 [2024-07-12 11:07:19.458538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.687 [2024-07-12 11:07:19.467731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.687 [2024-07-12 11:07:19.468410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.687 [2024-07-12 11:07:19.468440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.687 [2024-07-12 11:07:19.468448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.687 [2024-07-12 11:07:19.468616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.687 [2024-07-12 11:07:19.468769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.687 [2024-07-12 11:07:19.468776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.687 [2024-07-12 11:07:19.468784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.687 [2024-07-12 11:07:19.471395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.687 [2024-07-12 11:07:19.480447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.687 [2024-07-12 11:07:19.480983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.687 [2024-07-12 11:07:19.480998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.687 [2024-07-12 11:07:19.481004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.687 [2024-07-12 11:07:19.481161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.687 [2024-07-12 11:07:19.481313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.687 [2024-07-12 11:07:19.481319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.687 [2024-07-12 11:07:19.481324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.483756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.493076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.493692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.493722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.493731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.493899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.494053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.494059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.688 [2024-07-12 11:07:19.494064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.496506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.505694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.506400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.506430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.506439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.506607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.506760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.506766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.688 [2024-07-12 11:07:19.506771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.509214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.518400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.518906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.518923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.518929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.519080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.519237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.519243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.688 [2024-07-12 11:07:19.519248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.521682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.531007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.531613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.531642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.531651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.531818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.531972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.531979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.688 [2024-07-12 11:07:19.531984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.534426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.543755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.544433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.544462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.544471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.544638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.544791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.544797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.688 [2024-07-12 11:07:19.544802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.547246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.556426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.557048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.557077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.557086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.557261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.557419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.557426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.688 [2024-07-12 11:07:19.557431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.559867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.569049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.569706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.569736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.569745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.569912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.570065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.570071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.688 [2024-07-12 11:07:19.570077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.572519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.581702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.582350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.582380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.582388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.582558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.582712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.582718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.688 [2024-07-12 11:07:19.582724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.585166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.594350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.594953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.594982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.594991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.595165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.595319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.595325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.688 [2024-07-12 11:07:19.595331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.597773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.607094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.607712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.607741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.607750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.607917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.608070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.608076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.688 [2024-07-12 11:07:19.608082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.688 [2024-07-12 11:07:19.610525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.688 [2024-07-12 11:07:19.619717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.688 [2024-07-12 11:07:19.620401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.688 [2024-07-12 11:07:19.620430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.688 [2024-07-12 11:07:19.620439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.688 [2024-07-12 11:07:19.620606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.688 [2024-07-12 11:07:19.620760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.688 [2024-07-12 11:07:19.620766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.689 [2024-07-12 11:07:19.620771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.689 [2024-07-12 11:07:19.623215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.689 [2024-07-12 11:07:19.632441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.689 [2024-07-12 11:07:19.633065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.689 [2024-07-12 11:07:19.633094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.689 [2024-07-12 11:07:19.633103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.689 [2024-07-12 11:07:19.633279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.689 [2024-07-12 11:07:19.633434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.689 [2024-07-12 11:07:19.633440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.689 [2024-07-12 11:07:19.633446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.689 [2024-07-12 11:07:19.635882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.689 [2024-07-12 11:07:19.645068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.689 [2024-07-12 11:07:19.645671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.689 [2024-07-12 11:07:19.645701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.689 [2024-07-12 11:07:19.645715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.689 [2024-07-12 11:07:19.645882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.689 [2024-07-12 11:07:19.646036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.689 [2024-07-12 11:07:19.646043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.689 [2024-07-12 11:07:19.646049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.689 [2024-07-12 11:07:19.648492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.689 [2024-07-12 11:07:19.657682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.689 [2024-07-12 11:07:19.658342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.689 [2024-07-12 11:07:19.658372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.689 [2024-07-12 11:07:19.658381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.689 [2024-07-12 11:07:19.658550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.689 [2024-07-12 11:07:19.658704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.689 [2024-07-12 11:07:19.658713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.689 [2024-07-12 11:07:19.658720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.689 [2024-07-12 11:07:19.661165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.951 [2024-07-12 11:07:19.670368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.951 [2024-07-12 11:07:19.670775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.951 [2024-07-12 11:07:19.670790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.951 [2024-07-12 11:07:19.670795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.951 [2024-07-12 11:07:19.670947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.951 [2024-07-12 11:07:19.671098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.951 [2024-07-12 11:07:19.671105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.951 [2024-07-12 11:07:19.671110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.951 [2024-07-12 11:07:19.673549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.951 [2024-07-12 11:07:19.683024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.951 [2024-07-12 11:07:19.683612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.951 [2024-07-12 11:07:19.683624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.951 [2024-07-12 11:07:19.683629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.951 [2024-07-12 11:07:19.683780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.951 [2024-07-12 11:07:19.683931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.951 [2024-07-12 11:07:19.683940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.951 [2024-07-12 11:07:19.683945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.951 [2024-07-12 11:07:19.686380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.951 [2024-07-12 11:07:19.695715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.951 [2024-07-12 11:07:19.696263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.951 [2024-07-12 11:07:19.696275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.952 [2024-07-12 11:07:19.696280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.952 [2024-07-12 11:07:19.696431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.952 [2024-07-12 11:07:19.696582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.952 [2024-07-12 11:07:19.696588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.952 [2024-07-12 11:07:19.696593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.952 [2024-07-12 11:07:19.699027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.952 [2024-07-12 11:07:19.708358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.952 [2024-07-12 11:07:19.708983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.952 [2024-07-12 11:07:19.709013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.952 [2024-07-12 11:07:19.709021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.952 [2024-07-12 11:07:19.709194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.952 [2024-07-12 11:07:19.709348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.952 [2024-07-12 11:07:19.709354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.952 [2024-07-12 11:07:19.709359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.952 [2024-07-12 11:07:19.711798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.952 [2024-07-12 11:07:19.721007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.952 [2024-07-12 11:07:19.721455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.952 [2024-07-12 11:07:19.721484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.952 [2024-07-12 11:07:19.721493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.952 [2024-07-12 11:07:19.721662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.952 [2024-07-12 11:07:19.721821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.952 [2024-07-12 11:07:19.721830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.952 [2024-07-12 11:07:19.721835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.952 [2024-07-12 11:07:19.724288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.952 [2024-07-12 11:07:19.733623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.952 [2024-07-12 11:07:19.734159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.952 [2024-07-12 11:07:19.734173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.952 [2024-07-12 11:07:19.734179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.952 [2024-07-12 11:07:19.734330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.952 [2024-07-12 11:07:19.734481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.952 [2024-07-12 11:07:19.734486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.952 [2024-07-12 11:07:19.734491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.952 [2024-07-12 11:07:19.736928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.952 [2024-07-12 11:07:19.746261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.952 [2024-07-12 11:07:19.746732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.952 [2024-07-12 11:07:19.746762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.952 [2024-07-12 11:07:19.746770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.952 [2024-07-12 11:07:19.746937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.952 [2024-07-12 11:07:19.747092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.952 [2024-07-12 11:07:19.747098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.952 [2024-07-12 11:07:19.747103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.952 [2024-07-12 11:07:19.749549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.952 [2024-07-12 11:07:19.758889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.952 [2024-07-12 11:07:19.759525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.952 [2024-07-12 11:07:19.759554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.952 [2024-07-12 11:07:19.759562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.952 [2024-07-12 11:07:19.759730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.952 [2024-07-12 11:07:19.759883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.952 [2024-07-12 11:07:19.759889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.952 [2024-07-12 11:07:19.759894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.952 [2024-07-12 11:07:19.762337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.952 [2024-07-12 11:07:19.771539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.952 [2024-07-12 11:07:19.772142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.952 [2024-07-12 11:07:19.772172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.952 [2024-07-12 11:07:19.772181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.952 [2024-07-12 11:07:19.772351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.952 [2024-07-12 11:07:19.772505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.952 [2024-07-12 11:07:19.772512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.952 [2024-07-12 11:07:19.772517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.952 [2024-07-12 11:07:19.774961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.952 [2024-07-12 11:07:19.784156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.952 [2024-07-12 11:07:19.784777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.952 [2024-07-12 11:07:19.784807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.952 [2024-07-12 11:07:19.784815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.952 [2024-07-12 11:07:19.784983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.952 [2024-07-12 11:07:19.785142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.952 [2024-07-12 11:07:19.785149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.952 [2024-07-12 11:07:19.785154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.952 [2024-07-12 11:07:19.787591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.952 [2024-07-12 11:07:19.796780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.952 [2024-07-12 11:07:19.797312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.952 [2024-07-12 11:07:19.797327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.952 [2024-07-12 11:07:19.797333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.952 [2024-07-12 11:07:19.797484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.952 [2024-07-12 11:07:19.797635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.952 [2024-07-12 11:07:19.797640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.952 [2024-07-12 11:07:19.797645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.952 [2024-07-12 11:07:19.800076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.952 [2024-07-12 11:07:19.809408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.952 [2024-07-12 11:07:19.809908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.952 [2024-07-12 11:07:19.809920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.952 [2024-07-12 11:07:19.809925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.952 [2024-07-12 11:07:19.810075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.952 [2024-07-12 11:07:19.810230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.952 [2024-07-12 11:07:19.810236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.952 [2024-07-12 11:07:19.810245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.952 [2024-07-12 11:07:19.812678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.952 [2024-07-12 11:07:19.822154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.952 [2024-07-12 11:07:19.822736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.952 [2024-07-12 11:07:19.822765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.952 [2024-07-12 11:07:19.822773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.952 [2024-07-12 11:07:19.822940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.952 [2024-07-12 11:07:19.823094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.952 [2024-07-12 11:07:19.823101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.952 [2024-07-12 11:07:19.823106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.952 [2024-07-12 11:07:19.825550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.952 [2024-07-12 11:07:19.834885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.952 [2024-07-12 11:07:19.835549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.953 [2024-07-12 11:07:19.835579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.953 [2024-07-12 11:07:19.835588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.953 [2024-07-12 11:07:19.835755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.953 [2024-07-12 11:07:19.835908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.953 [2024-07-12 11:07:19.835914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.953 [2024-07-12 11:07:19.835920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.953 [2024-07-12 11:07:19.838365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.953 [2024-07-12 11:07:19.847556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.953 [2024-07-12 11:07:19.848089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.953 [2024-07-12 11:07:19.848119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.953 [2024-07-12 11:07:19.848134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.953 [2024-07-12 11:07:19.848301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.953 [2024-07-12 11:07:19.848455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.953 [2024-07-12 11:07:19.848461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.953 [2024-07-12 11:07:19.848466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.953 [2024-07-12 11:07:19.850904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.953 [2024-07-12 11:07:19.860245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.953 [2024-07-12 11:07:19.860887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.953 [2024-07-12 11:07:19.860917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.953 [2024-07-12 11:07:19.860925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.953 [2024-07-12 11:07:19.861092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.953 [2024-07-12 11:07:19.861250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.953 [2024-07-12 11:07:19.861256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.953 [2024-07-12 11:07:19.861262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.953 [2024-07-12 11:07:19.863701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.953 [2024-07-12 11:07:19.872900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.953 [2024-07-12 11:07:19.873362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.953 [2024-07-12 11:07:19.873391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:02.953 [2024-07-12 11:07:19.873400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:02.953 [2024-07-12 11:07:19.873567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:02.953 [2024-07-12 11:07:19.873720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.953 [2024-07-12 11:07:19.873726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.953 [2024-07-12 11:07:19.873732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.953 [2024-07-12 11:07:19.876178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.953 [2024-07-12 11:07:19.885653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.953 [2024-07-12 11:07:19.886210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.953 [2024-07-12 11:07:19.886224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.953 [2024-07-12 11:07:19.886229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.953 [2024-07-12 11:07:19.886380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.953 [2024-07-12 11:07:19.886531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.953 [2024-07-12 11:07:19.886537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.953 [2024-07-12 11:07:19.886542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.953 [2024-07-12 11:07:19.888979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.953 [2024-07-12 11:07:19.898313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.953 [2024-07-12 11:07:19.898858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.953 [2024-07-12 11:07:19.898870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.953 [2024-07-12 11:07:19.898875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.953 [2024-07-12 11:07:19.899025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.953 [2024-07-12 11:07:19.899184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.953 [2024-07-12 11:07:19.899191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.953 [2024-07-12 11:07:19.899195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.953 [2024-07-12 11:07:19.901630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.953 [2024-07-12 11:07:19.910958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.953 [2024-07-12 11:07:19.911483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.953 [2024-07-12 11:07:19.911494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.953 [2024-07-12 11:07:19.911499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.953 [2024-07-12 11:07:19.911650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.953 [2024-07-12 11:07:19.911800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.953 [2024-07-12 11:07:19.911806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.953 [2024-07-12 11:07:19.911811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.953 [2024-07-12 11:07:19.914246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2276984 Killed "${NVMF_APP[@]}" "$@"
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:02.953 [2024-07-12 11:07:19.923574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:02.953 [2024-07-12 11:07:19.924125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.953 [2024-07-12 11:07:19.924137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:02.953 [2024-07-12 11:07:19.924142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:02.953 [2024-07-12 11:07:19.924292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:02.953 [2024-07-12 11:07:19.924443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:02.953 [2024-07-12 11:07:19.924449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:02.953 [2024-07-12 11:07:19.924453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:02.953 [2024-07-12 11:07:19.926888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2278534
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2278534
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2278534 ']'
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:02.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:02.953 11:07:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:03.216 [2024-07-12 11:07:19.936221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.216 [2024-07-12 11:07:19.936763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.216 [2024-07-12 11:07:19.936794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.216 [2024-07-12 11:07:19.936803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.216 [2024-07-12 11:07:19.936970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.216 [2024-07-12 11:07:19.937130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.216 [2024-07-12 11:07:19.937138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.216 [2024-07-12 11:07:19.937143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.216 [2024-07-12 11:07:19.939583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
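The xtrace above shows tgt_init restarting the target: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, records its PID (2278534), and waitforlisten polls until the RPC socket /var/tmp/spdk.sock answers, giving up after max_retries=100. A rough sketch of that wait loop, assuming the real helper in autotest_common.sh behaves like an RPC poll; the rpc.py path is an assumption, the other values are taken from the log:

    nvmfpid=2278534                  # PID reported by nvmfappstart above
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    rpc_py=./spdk/scripts/rpc.py     # assumed location of the RPC client
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1        # target died: give up
        "$rpc_py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5                                       # not listening yet
    done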
00:29:03.216 [2024-07-12 11:07:19.948952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.216 [2024-07-12 11:07:19.949505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.216 [2024-07-12 11:07:19.949520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.216 [2024-07-12 11:07:19.949525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.216 [2024-07-12 11:07:19.949677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.216 [2024-07-12 11:07:19.949827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.216 [2024-07-12 11:07:19.949833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.216 [2024-07-12 11:07:19.949838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.216 [2024-07-12 11:07:19.952275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.216 [2024-07-12 11:07:19.961614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.216 [2024-07-12 11:07:19.962135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.216 [2024-07-12 11:07:19.962148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.216 [2024-07-12 11:07:19.962153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.216 [2024-07-12 11:07:19.962304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.216 [2024-07-12 11:07:19.962454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.217 [2024-07-12 11:07:19.962460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.217 [2024-07-12 11:07:19.962465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.217 [2024-07-12 11:07:19.964902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.217 [2024-07-12 11:07:19.974255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.217 [2024-07-12 11:07:19.974790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.217 [2024-07-12 11:07:19.974801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.217 [2024-07-12 11:07:19.974807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.217 [2024-07-12 11:07:19.974958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.217 [2024-07-12 11:07:19.975108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.217 [2024-07-12 11:07:19.975114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.217 [2024-07-12 11:07:19.975119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.217 [2024-07-12 11:07:19.977556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.217 [2024-07-12 11:07:19.985740] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:29:03.217 [2024-07-12 11:07:19.985786] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:03.217 [2024-07-12 11:07:19.986884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.217 [2024-07-12 11:07:19.987493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.217 [2024-07-12 11:07:19.987523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.217 [2024-07-12 11:07:19.987532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.217 [2024-07-12 11:07:19.987701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.217 [2024-07-12 11:07:19.987854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.217 [2024-07-12 11:07:19.987860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.217 [2024-07-12 11:07:19.987866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.217 [2024-07-12 11:07:19.990311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.217 [2024-07-12 11:07:19.999513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.217 [2024-07-12 11:07:20.000217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.217 [2024-07-12 11:07:20.000247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.217 [2024-07-12 11:07:20.000256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.217 [2024-07-12 11:07:20.000426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.217 [2024-07-12 11:07:20.000580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.217 [2024-07-12 11:07:20.000586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.217 [2024-07-12 11:07:20.000591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.217 [2024-07-12 11:07:20.003539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.217 [2024-07-12 11:07:20.012178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.217 [2024-07-12 11:07:20.012718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.217 [2024-07-12 11:07:20.012732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.217 [2024-07-12 11:07:20.012738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.217 [2024-07-12 11:07:20.012889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.217 [2024-07-12 11:07:20.013040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.217 [2024-07-12 11:07:20.013046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.217 [2024-07-12 11:07:20.013051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.217 [2024-07-12 11:07:20.015493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.217 EAL: No free 2048 kB hugepages reported on node 1
00:29:03.217 [2024-07-12 11:07:20.024961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.217 [2024-07-12 11:07:20.025605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.217 [2024-07-12 11:07:20.025635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.217 [2024-07-12 11:07:20.025644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.217 [2024-07-12 11:07:20.025812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.217 [2024-07-12 11:07:20.025966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.217 [2024-07-12 11:07:20.025973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.217 [2024-07-12 11:07:20.025979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.217 [2024-07-12 11:07:20.028423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.217 [2024-07-12 11:07:20.037724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.217 [2024-07-12 11:07:20.038435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.217 [2024-07-12 11:07:20.038465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.217 [2024-07-12 11:07:20.038474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.217 [2024-07-12 11:07:20.038642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.217 [2024-07-12 11:07:20.038796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.217 [2024-07-12 11:07:20.038802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.217 [2024-07-12 11:07:20.038808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.217 [2024-07-12 11:07:20.041255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
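The EAL line above is only a per-node notice here, but when hugepage allocation is the actual problem the quickest check is the standard sysfs/procfs counters; the paths below are stock Linux, not taken from this log:

    # 2 MB hugepage counts per NUMA node, then the global summary
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep -i huge /proc/meminfo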
00:29:03.217 [2024-07-12 11:07:20.050449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.217 [2024-07-12 11:07:20.051083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.217 [2024-07-12 11:07:20.051112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.217 [2024-07-12 11:07:20.051127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.217 [2024-07-12 11:07:20.051302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.217 [2024-07-12 11:07:20.051456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.217 [2024-07-12 11:07:20.051462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.217 [2024-07-12 11:07:20.051467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.217 [2024-07-12 11:07:20.053907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.217 [2024-07-12 11:07:20.063090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.217 [2024-07-12 11:07:20.063429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.217 [2024-07-12 11:07:20.063444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.217 [2024-07-12 11:07:20.063450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.217 [2024-07-12 11:07:20.063602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.217 [2024-07-12 11:07:20.063753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.217 [2024-07-12 11:07:20.063759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.217 [2024-07-12 11:07:20.063764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.217 [2024-07-12 11:07:20.066202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.217 [2024-07-12 11:07:20.067789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:03.217 [2024-07-12 11:07:20.075840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.217 [2024-07-12 11:07:20.076261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.217 [2024-07-12 11:07:20.076275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.217 [2024-07-12 11:07:20.076280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.217 [2024-07-12 11:07:20.076432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.217 [2024-07-12 11:07:20.076583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.217 [2024-07-12 11:07:20.076588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.217 [2024-07-12 11:07:20.076594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.217 [2024-07-12 11:07:20.079027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.217 [2024-07-12 11:07:20.090045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.217 [2024-07-12 11:07:20.090599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.218 [2024-07-12 11:07:20.090612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.218 [2024-07-12 11:07:20.090618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.218 [2024-07-12 11:07:20.090771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.218 [2024-07-12 11:07:20.090922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.218 [2024-07-12 11:07:20.090932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.218 [2024-07-12 11:07:20.090938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.218 [2024-07-12 11:07:20.093377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.218 [2024-07-12 11:07:20.102716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.218 [2024-07-12 11:07:20.103442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.218 [2024-07-12 11:07:20.103474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.218 [2024-07-12 11:07:20.103484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.218 [2024-07-12 11:07:20.103657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.218 [2024-07-12 11:07:20.103810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.218 [2024-07-12 11:07:20.103817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.218 [2024-07-12 11:07:20.103823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.218 [2024-07-12 11:07:20.106271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.218 [2024-07-12 11:07:20.115466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.218 [2024-07-12 11:07:20.116011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.218 [2024-07-12 11:07:20.116042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.218 [2024-07-12 11:07:20.116051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.218 [2024-07-12 11:07:20.116225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.218 [2024-07-12 11:07:20.116379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.218 [2024-07-12 11:07:20.116385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.218 [2024-07-12 11:07:20.116390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.218 [2024-07-12 11:07:20.118829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.218 [2024-07-12 11:07:20.124573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:03.218 [2024-07-12 11:07:20.124597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:03.218 [2024-07-12 11:07:20.124603] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:03.218 [2024-07-12 11:07:20.124608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:03.218 [2024-07-12 11:07:20.124613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
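The app_setup_trace notices above spell out both capture options; concretely (the commands are the ones the log itself suggests, the output destinations are illustrative):

    # Live snapshot of the nvmf app's tracepoints (shm name nvmf, instance 0)
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Or keep the raw shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0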
00:29:03.218 [2024-07-12 11:07:20.124768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:29:03.218 [2024-07-12 11:07:20.124895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:03.218 [2024-07-12 11:07:20.124897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:29:03.218 [2024-07-12 11:07:20.128171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.218 [2024-07-12 11:07:20.128688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.218 [2024-07-12 11:07:20.128702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.218 [2024-07-12 11:07:20.128713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.218 [2024-07-12 11:07:20.128865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.218 [2024-07-12 11:07:20.129015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.218 [2024-07-12 11:07:20.129021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.218 [2024-07-12 11:07:20.129026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.218 [2024-07-12 11:07:20.131466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.218 [2024-07-12 11:07:20.140801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.218 [2024-07-12 11:07:20.141362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.218 [2024-07-12 11:07:20.141375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.218 [2024-07-12 11:07:20.141381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.218 [2024-07-12 11:07:20.141533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.218 [2024-07-12 11:07:20.141684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.218 [2024-07-12 11:07:20.141690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.218 [2024-07-12 11:07:20.141695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.218 [2024-07-12 11:07:20.144136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
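Three reactors on cores 1-3 follow directly from the core mask: the restarted target was launched with -m 0xE, and 0xE = 0b1110 has bits 1, 2 and 3 set (bit 0 clear), which matches "Total cores available: 3" and the reactor_run notices above. A quick way to decode any such mask in the shell:

    mask=0xE
    for core in 0 1 2 3; do
        (( (mask >> core) & 1 )) && echo "core $core is in the mask"
    done
    # prints cores 1, 2 and 3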
00:29:03.218 [2024-07-12 11:07:20.153468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.218 [2024-07-12 11:07:20.153753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.218 [2024-07-12 11:07:20.153766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.218 [2024-07-12 11:07:20.153772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.218 [2024-07-12 11:07:20.153923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.218 [2024-07-12 11:07:20.154074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.218 [2024-07-12 11:07:20.154080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.218 [2024-07-12 11:07:20.154085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.218 [2024-07-12 11:07:20.156523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.218 [2024-07-12 11:07:20.166154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.218 [2024-07-12 11:07:20.166689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.218 [2024-07-12 11:07:20.166702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.218 [2024-07-12 11:07:20.166707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.218 [2024-07-12 11:07:20.166859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.218 [2024-07-12 11:07:20.167009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.218 [2024-07-12 11:07:20.167019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.218 [2024-07-12 11:07:20.167025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.218 [2024-07-12 11:07:20.169461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.218 [2024-07-12 11:07:20.178803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.218 [2024-07-12 11:07:20.179205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.218 [2024-07-12 11:07:20.179218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.218 [2024-07-12 11:07:20.179223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.218 [2024-07-12 11:07:20.179375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.218 [2024-07-12 11:07:20.179525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.218 [2024-07-12 11:07:20.179531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.218 [2024-07-12 11:07:20.179536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.218 [2024-07-12 11:07:20.181967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.218 [2024-07-12 11:07:20.191442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.218 [2024-07-12 11:07:20.192002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.218 [2024-07-12 11:07:20.192014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.218 [2024-07-12 11:07:20.192019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.218 [2024-07-12 11:07:20.192173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.218 [2024-07-12 11:07:20.192324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.218 [2024-07-12 11:07:20.192330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.218 [2024-07-12 11:07:20.192335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.218 [2024-07-12 11:07:20.194768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.481 [2024-07-12 11:07:20.204096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.481 [2024-07-12 11:07:20.204630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.481 [2024-07-12 11:07:20.204642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.481 [2024-07-12 11:07:20.204647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.481 [2024-07-12 11:07:20.204798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.481 [2024-07-12 11:07:20.204950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.481 [2024-07-12 11:07:20.204956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.481 [2024-07-12 11:07:20.204960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.481 [2024-07-12 11:07:20.207394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.481 [2024-07-12 11:07:20.216726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.481 [2024-07-12 11:07:20.217242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.481 [2024-07-12 11:07:20.217254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.481 [2024-07-12 11:07:20.217259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.481 [2024-07-12 11:07:20.217410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.481 [2024-07-12 11:07:20.217560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.481 [2024-07-12 11:07:20.217566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.481 [2024-07-12 11:07:20.217571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.481 [2024-07-12 11:07:20.220001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.481 [2024-07-12 11:07:20.229473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.481 [2024-07-12 11:07:20.230107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.481 [2024-07-12 11:07:20.230118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.481 [2024-07-12 11:07:20.230127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.481 [2024-07-12 11:07:20.230278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.481 [2024-07-12 11:07:20.230429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.481 [2024-07-12 11:07:20.230435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.481 [2024-07-12 11:07:20.230440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.481 [2024-07-12 11:07:20.232872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.481 [2024-07-12 11:07:20.242205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.481 [2024-07-12 11:07:20.242826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.481 [2024-07-12 11:07:20.242859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.481 [2024-07-12 11:07:20.242868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.481 [2024-07-12 11:07:20.243040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.481 [2024-07-12 11:07:20.243201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.481 [2024-07-12 11:07:20.243208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.481 [2024-07-12 11:07:20.243214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.481 [2024-07-12 11:07:20.245654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.481 [2024-07-12 11:07:20.254842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.481 [2024-07-12 11:07:20.255365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.481 [2024-07-12 11:07:20.255380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.481 [2024-07-12 11:07:20.255385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.481 [2024-07-12 11:07:20.255540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.481 [2024-07-12 11:07:20.255691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.481 [2024-07-12 11:07:20.255697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.481 [2024-07-12 11:07:20.255702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.481 [2024-07-12 11:07:20.258139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.481 [2024-07-12 11:07:20.267469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.481 [2024-07-12 11:07:20.267871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.481 [2024-07-12 11:07:20.267883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.481 [2024-07-12 11:07:20.267888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.481 [2024-07-12 11:07:20.268039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.481 [2024-07-12 11:07:20.268193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.481 [2024-07-12 11:07:20.268200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.481 [2024-07-12 11:07:20.268205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.481 [2024-07-12 11:07:20.270639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.280126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.482 [2024-07-12 11:07:20.280661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.482 [2024-07-12 11:07:20.280673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.482 [2024-07-12 11:07:20.280678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.482 [2024-07-12 11:07:20.280829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.482 [2024-07-12 11:07:20.280979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.482 [2024-07-12 11:07:20.280986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.482 [2024-07-12 11:07:20.280991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.482 [2024-07-12 11:07:20.283429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.292761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.482 [2024-07-12 11:07:20.293386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.482 [2024-07-12 11:07:20.293417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.482 [2024-07-12 11:07:20.293426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.482 [2024-07-12 11:07:20.293596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.482 [2024-07-12 11:07:20.293750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.482 [2024-07-12 11:07:20.293757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.482 [2024-07-12 11:07:20.293768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.482 [2024-07-12 11:07:20.296220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.305422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.482 [2024-07-12 11:07:20.306063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.482 [2024-07-12 11:07:20.306093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.482 [2024-07-12 11:07:20.306102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.482 [2024-07-12 11:07:20.306278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.482 [2024-07-12 11:07:20.306432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.482 [2024-07-12 11:07:20.306439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.482 [2024-07-12 11:07:20.306445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.482 [2024-07-12 11:07:20.308883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.318073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.482 [2024-07-12 11:07:20.318645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.482 [2024-07-12 11:07:20.318659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.482 [2024-07-12 11:07:20.318665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.482 [2024-07-12 11:07:20.318816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.482 [2024-07-12 11:07:20.318967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.482 [2024-07-12 11:07:20.318972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.482 [2024-07-12 11:07:20.318977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.482 [2024-07-12 11:07:20.321417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.330748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.482 [2024-07-12 11:07:20.331257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.482 [2024-07-12 11:07:20.331270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.482 [2024-07-12 11:07:20.331275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.482 [2024-07-12 11:07:20.331426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.482 [2024-07-12 11:07:20.331577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.482 [2024-07-12 11:07:20.331582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.482 [2024-07-12 11:07:20.331587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.482 [2024-07-12 11:07:20.334019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.343494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.482 [2024-07-12 11:07:20.344145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.482 [2024-07-12 11:07:20.344179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.482 [2024-07-12 11:07:20.344187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.482 [2024-07-12 11:07:20.344355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.482 [2024-07-12 11:07:20.344509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.482 [2024-07-12 11:07:20.344515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.482 [2024-07-12 11:07:20.344521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.482 [2024-07-12 11:07:20.346964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.356158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.482 [2024-07-12 11:07:20.356743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.482 [2024-07-12 11:07:20.356760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.482 [2024-07-12 11:07:20.356768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.482 [2024-07-12 11:07:20.356922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.482 [2024-07-12 11:07:20.357073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.482 [2024-07-12 11:07:20.357079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.482 [2024-07-12 11:07:20.357083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.482 [2024-07-12 11:07:20.359525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.368859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.482 [2024-07-12 11:07:20.369373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.482 [2024-07-12 11:07:20.369386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.482 [2024-07-12 11:07:20.369391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.482 [2024-07-12 11:07:20.369542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.482 [2024-07-12 11:07:20.369693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.482 [2024-07-12 11:07:20.369698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.482 [2024-07-12 11:07:20.369703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.482 [2024-07-12 11:07:20.372136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.381473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.482 [2024-07-12 11:07:20.382084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.482 [2024-07-12 11:07:20.382113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.482 [2024-07-12 11:07:20.382128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.482 [2024-07-12 11:07:20.382297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.482 [2024-07-12 11:07:20.382454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.482 [2024-07-12 11:07:20.382460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.482 [2024-07-12 11:07:20.382466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.482 [2024-07-12 11:07:20.384902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.394085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.482 [2024-07-12 11:07:20.394499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.482 [2024-07-12 11:07:20.394513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:03.482 [2024-07-12 11:07:20.394519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:03.482 [2024-07-12 11:07:20.394670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:03.482 [2024-07-12 11:07:20.394821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.482 [2024-07-12 11:07:20.394826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.482 [2024-07-12 11:07:20.394831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.482 [2024-07-12 11:07:20.397266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.482 [2024-07-12 11:07:20.406733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.482 [2024-07-12 11:07:20.407070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.482 [2024-07-12 11:07:20.407082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.482 [2024-07-12 11:07:20.407087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.483 [2024-07-12 11:07:20.407242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.483 [2024-07-12 11:07:20.407393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.483 [2024-07-12 11:07:20.407399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.483 [2024-07-12 11:07:20.407404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.483 [2024-07-12 11:07:20.409836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.483 [2024-07-12 11:07:20.419442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.483 [2024-07-12 11:07:20.419940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.483 [2024-07-12 11:07:20.419951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.483 [2024-07-12 11:07:20.419957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.483 [2024-07-12 11:07:20.420107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.483 [2024-07-12 11:07:20.420262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.483 [2024-07-12 11:07:20.420267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.483 [2024-07-12 11:07:20.420272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.483 [2024-07-12 11:07:20.422706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.483 [2024-07-12 11:07:20.432171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.483 [2024-07-12 11:07:20.432727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.483 [2024-07-12 11:07:20.432739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.483 [2024-07-12 11:07:20.432744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.483 [2024-07-12 11:07:20.432895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.483 [2024-07-12 11:07:20.433045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.483 [2024-07-12 11:07:20.433051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.483 [2024-07-12 11:07:20.433056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.483 [2024-07-12 11:07:20.435490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.483 [2024-07-12 11:07:20.444809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.483 [2024-07-12 11:07:20.445157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.483 [2024-07-12 11:07:20.445173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.483 [2024-07-12 11:07:20.445179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.483 [2024-07-12 11:07:20.445333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.483 [2024-07-12 11:07:20.445485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.483 [2024-07-12 11:07:20.445491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.483 [2024-07-12 11:07:20.445495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.483 [2024-07-12 11:07:20.447930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.483 [2024-07-12 11:07:20.457640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.483 [2024-07-12 11:07:20.458227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.483 [2024-07-12 11:07:20.458257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.483 [2024-07-12 11:07:20.458265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.483 [2024-07-12 11:07:20.458435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.483 [2024-07-12 11:07:20.458589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.483 [2024-07-12 11:07:20.458595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.483 [2024-07-12 11:07:20.458600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.483 [2024-07-12 11:07:20.461042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.745 [2024-07-12 11:07:20.470373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.745 [2024-07-12 11:07:20.470970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.745 [2024-07-12 11:07:20.471000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.745 [2024-07-12 11:07:20.471013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.745 [2024-07-12 11:07:20.471344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.745 [2024-07-12 11:07:20.471536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.745 [2024-07-12 11:07:20.471543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.745 [2024-07-12 11:07:20.471548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.745 [2024-07-12 11:07:20.473985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.745 [2024-07-12 11:07:20.483034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.745 [2024-07-12 11:07:20.483723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.745 [2024-07-12 11:07:20.483753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.745 [2024-07-12 11:07:20.483762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.745 [2024-07-12 11:07:20.483930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.745 [2024-07-12 11:07:20.484083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.745 [2024-07-12 11:07:20.484089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.745 [2024-07-12 11:07:20.484095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.745 [2024-07-12 11:07:20.486537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.745 [2024-07-12 11:07:20.495720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.745 [2024-07-12 11:07:20.496432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.745 [2024-07-12 11:07:20.496461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.496470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.496638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.496791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.496797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.496803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.746 [2024-07-12 11:07:20.499245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.746 [2024-07-12 11:07:20.508430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.746 [2024-07-12 11:07:20.509009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.746 [2024-07-12 11:07:20.509023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.509028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.509185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.509337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.509346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.509351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.746 [2024-07-12 11:07:20.511786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.746 [2024-07-12 11:07:20.521148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.746 [2024-07-12 11:07:20.521701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.746 [2024-07-12 11:07:20.521713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.521718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.521869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.522020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.522025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.522030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.746 [2024-07-12 11:07:20.524463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.746 [2024-07-12 11:07:20.533785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.746 [2024-07-12 11:07:20.534188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.746 [2024-07-12 11:07:20.534203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.534208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.534360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.534511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.534517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.534521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.746 [2024-07-12 11:07:20.536954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.746 [2024-07-12 11:07:20.546425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.746 [2024-07-12 11:07:20.546977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.746 [2024-07-12 11:07:20.546988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.546993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.547147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.547298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.547304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.547309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.746 [2024-07-12 11:07:20.549739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.746 [2024-07-12 11:07:20.559068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.746 [2024-07-12 11:07:20.559627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.746 [2024-07-12 11:07:20.559639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.559644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.559795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.559945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.559951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.559956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.746 [2024-07-12 11:07:20.562389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.746 [2024-07-12 11:07:20.571713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.746 [2024-07-12 11:07:20.572225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.746 [2024-07-12 11:07:20.572236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.572241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.572392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.572543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.572548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.572553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.746 [2024-07-12 11:07:20.574985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.746 [2024-07-12 11:07:20.584512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.746 [2024-07-12 11:07:20.585113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.746 [2024-07-12 11:07:20.585147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.585156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.585325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.585479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.585485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.585490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.746 [2024-07-12 11:07:20.587929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.746 [2024-07-12 11:07:20.597270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.746 [2024-07-12 11:07:20.597804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.746 [2024-07-12 11:07:20.597817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.597823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.597978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.598135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.598142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.598146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.746 [2024-07-12 11:07:20.600579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.746 [2024-07-12 11:07:20.609908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.746 [2024-07-12 11:07:20.610528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.746 [2024-07-12 11:07:20.610558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.610567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.610735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.610889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.610895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.610900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.746 [2024-07-12 11:07:20.613343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.746 [2024-07-12 11:07:20.622534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.746 [2024-07-12 11:07:20.623202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.746 [2024-07-12 11:07:20.623232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.746 [2024-07-12 11:07:20.623240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.746 [2024-07-12 11:07:20.623410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.746 [2024-07-12 11:07:20.623564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.746 [2024-07-12 11:07:20.623570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.746 [2024-07-12 11:07:20.623575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.747 [2024-07-12 11:07:20.626019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.747 [2024-07-12 11:07:20.635206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.747 [2024-07-12 11:07:20.635754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.747 [2024-07-12 11:07:20.635768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.747 [2024-07-12 11:07:20.635773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.747 [2024-07-12 11:07:20.635925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.747 [2024-07-12 11:07:20.636075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.747 [2024-07-12 11:07:20.636081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.747 [2024-07-12 11:07:20.636089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.747 [2024-07-12 11:07:20.638526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.747 [2024-07-12 11:07:20.647850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.747 [2024-07-12 11:07:20.648503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.747 [2024-07-12 11:07:20.648533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.747 [2024-07-12 11:07:20.648542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.747 [2024-07-12 11:07:20.648709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.747 [2024-07-12 11:07:20.648863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.747 [2024-07-12 11:07:20.648869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.747 [2024-07-12 11:07:20.648875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.747 [2024-07-12 11:07:20.651316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.747 [2024-07-12 11:07:20.660505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.747 [2024-07-12 11:07:20.661168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.747 [2024-07-12 11:07:20.661198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.747 [2024-07-12 11:07:20.661207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.747 [2024-07-12 11:07:20.661376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.747 [2024-07-12 11:07:20.661530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.747 [2024-07-12 11:07:20.661536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.747 [2024-07-12 11:07:20.661541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.747 [2024-07-12 11:07:20.663984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.747 [2024-07-12 11:07:20.673169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.747 [2024-07-12 11:07:20.673766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.747 [2024-07-12 11:07:20.673795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.747 [2024-07-12 11:07:20.673803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.747 [2024-07-12 11:07:20.673971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.747 [2024-07-12 11:07:20.674132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.747 [2024-07-12 11:07:20.674138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.747 [2024-07-12 11:07:20.674143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.747 [2024-07-12 11:07:20.676589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.747 [2024-07-12 11:07:20.685778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.747 [2024-07-12 11:07:20.686326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.747 [2024-07-12 11:07:20.686344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.747 [2024-07-12 11:07:20.686350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.747 [2024-07-12 11:07:20.686502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.747 [2024-07-12 11:07:20.686654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.747 [2024-07-12 11:07:20.686660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.747 [2024-07-12 11:07:20.686665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.747 [2024-07-12 11:07:20.689097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.747 [2024-07-12 11:07:20.698435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.747 [2024-07-12 11:07:20.698980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.747 [2024-07-12 11:07:20.698992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.747 [2024-07-12 11:07:20.698998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.747 [2024-07-12 11:07:20.699153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.747 [2024-07-12 11:07:20.699305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.747 [2024-07-12 11:07:20.699311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.747 [2024-07-12 11:07:20.699316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.747 [2024-07-12 11:07:20.701747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.747 [2024-07-12 11:07:20.711074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.747 [2024-07-12 11:07:20.711709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.747 [2024-07-12 11:07:20.711739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.747 [2024-07-12 11:07:20.711748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.747 [2024-07-12 11:07:20.711915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.747 [2024-07-12 11:07:20.712069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.747 [2024-07-12 11:07:20.712075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.747 [2024-07-12 11:07:20.712080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.747 [2024-07-12 11:07:20.714523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.747 [2024-07-12 11:07:20.723708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.747 [2024-07-12 11:07:20.724220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.747 [2024-07-12 11:07:20.724235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:03.747 [2024-07-12 11:07:20.724240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:03.747 [2024-07-12 11:07:20.724392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:03.747 [2024-07-12 11:07:20.724546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.747 [2024-07-12 11:07:20.724552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.747 [2024-07-12 11:07:20.724557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.747 [2024-07-12 11:07:20.726989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.009 [2024-07-12 11:07:20.736316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.009 [2024-07-12 11:07:20.736871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.009 [2024-07-12 11:07:20.736883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:04.009 [2024-07-12 11:07:20.736888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:04.009 [2024-07-12 11:07:20.737038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:04.009 [2024-07-12 11:07:20.737193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.009 [2024-07-12 11:07:20.737199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.009 [2024-07-12 11:07:20.737204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.009 [2024-07-12 11:07:20.739635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.009 [2024-07-12 11:07:20.748953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.009 [2024-07-12 11:07:20.749603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.009 [2024-07-12 11:07:20.749632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420 00:29:04.009 [2024-07-12 11:07:20.749641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set 00:29:04.009 [2024-07-12 11:07:20.749808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor 00:29:04.009 [2024-07-12 11:07:20.749962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.009 [2024-07-12 11:07:20.749968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.010 [2024-07-12 11:07:20.749973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.010 [2024-07-12 11:07:20.752415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:04.010 [2024-07-12 11:07:20.761609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.010 [2024-07-12 11:07:20.762260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.010 [2024-07-12 11:07:20.762290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:04.010 [2024-07-12 11:07:20.762299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:04.010 [2024-07-12 11:07:20.762469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:04.010 [2024-07-12 11:07:20.762626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.010 [2024-07-12 11:07:20.762632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.010 [2024-07-12 11:07:20.762638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.010 [2024-07-12 11:07:20.765081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.010 [2024-07-12 11:07:20.774279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.010 [2024-07-12 11:07:20.774910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.010 [2024-07-12 11:07:20.774940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:04.010 [2024-07-12 11:07:20.774948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:04.010 [2024-07-12 11:07:20.775116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:04.010 [2024-07-12 11:07:20.775282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.010 [2024-07-12 11:07:20.775289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.010 [2024-07-12 11:07:20.775295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.010 [2024-07-12 11:07:20.777733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.010 [2024-07-12 11:07:20.786924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.010 [2024-07-12 11:07:20.787623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.010 [2024-07-12 11:07:20.787653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:04.010 [2024-07-12 11:07:20.787662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:04.010 [2024-07-12 11:07:20.787829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:04.010 [2024-07-12 11:07:20.787983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.010 [2024-07-12 11:07:20.787989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.010 [2024-07-12 11:07:20.787994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.010 [2024-07-12 11:07:20.790438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:04.010 [2024-07-12 11:07:20.799186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:04.010 [2024-07-12 11:07:20.799629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.010 [2024-07-12 11:07:20.800223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.010 [2024-07-12 11:07:20.800253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:04.010 [2024-07-12 11:07:20.800261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:04.010 [2024-07-12 11:07:20.800431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:04.010 [2024-07-12 11:07:20.800588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.010 [2024-07-12 11:07:20.800595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.010 [2024-07-12 11:07:20.800600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.010 [2024-07-12 11:07:20.803042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:04.010 [2024-07-12 11:07:20.812374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.010 [2024-07-12 11:07:20.812974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.010 [2024-07-12 11:07:20.813004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:04.010 [2024-07-12 11:07:20.813012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:04.010 [2024-07-12 11:07:20.813185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:04.010 [2024-07-12 11:07:20.813340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.010 [2024-07-12 11:07:20.813346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.010 [2024-07-12 11:07:20.813351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.010 [2024-07-12 11:07:20.815789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.010 [2024-07-12 11:07:20.825116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.010 [2024-07-12 11:07:20.825754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.010 [2024-07-12 11:07:20.825784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:04.010 [2024-07-12 11:07:20.825792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:04.010 [2024-07-12 11:07:20.825960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:04.010 [2024-07-12 11:07:20.826113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.010 [2024-07-12 11:07:20.826120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.010 [2024-07-12 11:07:20.826132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.010 [2024-07-12 11:07:20.828569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.010 Malloc0
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:04.010 [2024-07-12 11:07:20.837750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.010 [2024-07-12 11:07:20.838298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.010 [2024-07-12 11:07:20.838313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:04.010 [2024-07-12 11:07:20.838323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:04.010 [2024-07-12 11:07:20.838474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:04.010 [2024-07-12 11:07:20.838625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.010 [2024-07-12 11:07:20.838631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.010 [2024-07-12 11:07:20.838636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.010 [2024-07-12 11:07:20.841071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:04.010 [2024-07-12 11:07:20.850395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.010 [2024-07-12 11:07:20.850952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.010 [2024-07-12 11:07:20.850964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecc3b0 with addr=10.0.0.2, port=4420
00:29:04.010 [2024-07-12 11:07:20.850969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc3b0 is same with the state(5) to be set
00:29:04.010 [2024-07-12 11:07:20.851120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc3b0 (9): Bad file descriptor
00:29:04.010 [2024-07-12 11:07:20.851276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.010 [2024-07-12 11:07:20.851282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.010 [2024-07-12 11:07:20.851287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.010 [2024-07-12 11:07:20.853719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
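At this point the trace has created the transport, the backing malloc bdev, the subsystem and its namespace; only the listener is still missing, which is why the reconnect attempts above keep being refused. The five rpc_cmd calls amount to the standalone sequence below, a sketch assuming SPDK's scripts/rpc.py on its default RPC socket (rpc_cmd in the test harness is a thin wrapper around it; all arguments are copied from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, flags exactly as traced
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The listener call is the one traced immediately below; once it completes, the pending controller reset finally succeeds.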
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:04.010 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:04.010 [2024-07-12 11:07:20.862662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:04.011 [2024-07-12 11:07:20.863042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.011 11:07:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:04.011 11:07:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2277395
00:29:04.440 [2024-07-12 11:07:20.896002] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:14.012 
00:29:14.012 Latency(us)
00:29:14.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:14.012 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:14.012 Verification LBA range: start 0x0 length 0x4000
00:29:14.012 Nvme1n1 : 15.00 9427.99 36.83 12883.80 0.00 5717.09 754.35 13926.40
00:29:14.012 ===================================================================================================================
00:29:14.012 Total : 9427.99 36.83 12883.80 0.00 5717.09 754.35 13926.40
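The summary is internally consistent: 9427.99 IOPS at the 4096-byte IO size works out to 9427.99 * 4096 / 2^20 = 36.83 MiB/s, exactly the MiB/s column, over the 15.00 s runtime. The Job line also maps directly onto bdevperf flags; a sketch of an equivalent invocation follows (the binary path and JSON config name are assumptions, since the real run is wired up by test/nvmf/host/bdevperf.sh over an RPC socket):

    # -q 128: queue depth, -o 4096: IO size in bytes,
    # -w verify: write-then-read-back workload, -t 15: run time in seconds
    ./build/examples/bdevperf -q 128 -o 4096 -w verify -t 15 --json bdev_nvme_attach.json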
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:14.012 rmmod nvme_tcp
00:29:14.012 rmmod nvme_fabrics
00:29:14.012 rmmod nvme_keyring
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2278534 ']'
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2278534
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2278534 ']'
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2278534
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2278534
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2278534'
00:29:14.012 killing process with pid 2278534
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2278534
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2278534
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:14.012 11:07:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:14.957 11:07:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:14.957 
00:29:14.957 real 0m27.954s
00:29:14.957 user 1m3.156s
00:29:14.957 sys 0m7.291s
00:29:14.957 11:07:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:14.957 11:07:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:14.957 ************************************
00:29:14.957 END TEST nvmf_bdevperf
00:29:14.957 ************************************
00:29:15.218 11:07:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
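The teardown traced above mirrors the setup: flush IO, delete the subsystem, unload the kernel initiator modules (the rmmod lines show nvme_tcp dragging nvme_fabrics and nvme_keyring out with it), then kill the target process and reap it. Condensed into a sketch with the pid and subsystem name from this run (error handling and the process-name check done by killprocess are elided):

    sync
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # also removes nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 2278534 && wait 2278534   # stop the SPDK target and wait for it to exit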
00:29:15.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:15.218 11:07:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.358 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
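gather_supported_nvmf_pci_devs builds whitelists of NIC PCI IDs per family (Intel E810 = 8086:1592 and 8086:159b, X722 = 8086:37d2, plus the Mellanox ConnectX IDs above) and then resolves matching ports to kernel net devices through /sys. A hedged standalone approximation using lspci (the script itself consults a pre-built pci_bus_cache rather than calling lspci per device):

    for id in 8086:1592 8086:159b; do                    # E810 IDs from the arrays above
        for pci in $(lspci -Dnn -d "$id" | awk '{print $1}'); do
            ls "/sys/bus/pci/devices/$pci/net/"          # net device(s) behind this port
        done
    done

On this runner the two 8086:159b ports at 0000:4b:00.0 and 0000:4b:00.1 resolve to cvl_0_0 and cvl_0_1, as the "Found net devices under ..." lines below show.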
00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:23.359 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:23.359 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.359 11:07:39 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:23.359 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:23.359 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:23.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:29:23.359 00:29:23.359 --- 10.0.0.2 ping statistics --- 00:29:23.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.359 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:29:23.359 00:29:23.359 --- 10.0.0.1 ping statistics --- 00:29:23.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.359 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:23.359 ************************************ 00:29:23.359 START TEST nvmf_target_disconnect_tc1 00:29:23.359 ************************************ 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:23.359 
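Condensed from the commands above, the test topology is: the first E810 port (cvl_0_0, 10.0.0.2, target side) is moved into the cvl_0_0_ns_spdk network namespace while the second port (cvl_0_1, 10.0.0.1, initiator side) stays in the root namespace, and both directions are verified with ping before the test proper starts. The same setup can be reproduced on any two connected ports; device names here are the ones from this run:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface, as logged
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1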
11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:23.359 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:23.359 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.359 [2024-07-12 11:07:39.550678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.359 [2024-07-12 11:07:39.550743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1ce20 with addr=10.0.0.2, port=4420 00:29:23.359 [2024-07-12 11:07:39.550770] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:23.359 [2024-07-12 11:07:39.550781] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:23.359 [2024-07-12 11:07:39.550789] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:23.360 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:23.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:23.360 Initializing NVMe Controllers 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:23.360 00:29:23.360 real 0m0.131s 00:29:23.360 user 0m0.050s 00:29:23.360 sys 
0m0.080s 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:23.360 ************************************ 00:29:23.360 END TEST nvmf_target_disconnect_tc1 00:29:23.360 ************************************ 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:23.360 ************************************ 00:29:23.360 START TEST nvmf_target_disconnect_tc2 00:29:23.360 ************************************ 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2284555 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2284555 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2284555 ']' 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
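nvmfappstart launches nvmf_tgt inside the target namespace (the ip netns exec prefix in NVMF_APP above) and waitforlisten blocks until the app's RPC server answers on its UNIX socket. A hypothetical standalone equivalent of that wait, assuming the default /var/tmp/spdk.sock (the real helper also gives up after a retry limit):

    rpc_sock=/var/tmp/spdk.sock
    until scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # stop waiting if the target process died
        sleep 0.5
    done

rpc_get_methods is a cheap no-side-effect RPC, which makes it a reasonable liveness probe; the UNIX socket is filesystem-scoped, so the poll works from the root namespace even though the app runs inside cvl_0_0_ns_spdk.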
00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:23.360 11:07:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.360 [2024-07-12 11:07:39.713028] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:23.360 [2024-07-12 11:07:39.713099] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.360 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.360 [2024-07-12 11:07:39.800842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.360 [2024-07-12 11:07:39.895888] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.360 [2024-07-12 11:07:39.895952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.360 [2024-07-12 11:07:39.895960] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.360 [2024-07-12 11:07:39.895966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.360 [2024-07-12 11:07:39.895972] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.360 [2024-07-12 11:07:39.896163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:23.360 [2024-07-12 11:07:39.896411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:23.360 [2024-07-12 11:07:39.896574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:23.360 [2024-07-12 11:07:39.896575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.622 Malloc0 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:23.622 11:07:40 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.622 [2024-07-12 11:07:40.580306] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.622 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.884 [2024-07-12 11:07:40.620636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2284801 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:23.884 11:07:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:23.884 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:25.806 11:07:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2284555 00:29:25.806 11:07:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Write completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Write completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Write completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Write completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Write completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Write completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Write completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Write completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.806 starting I/O failed 00:29:25.806 Read completed with error (sct=0, sc=8) 00:29:25.807 starting I/O failed 00:29:25.807 Write completed with error (sct=0, sc=8) 00:29:25.807 starting I/O failed 00:29:25.807 Write completed with error (sct=0, sc=8) 00:29:25.807 starting I/O failed 00:29:25.807 Write completed with error (sct=0, sc=8) 00:29:25.807 starting I/O failed 00:29:25.807 Write completed with error (sct=0, sc=8) 00:29:25.807 starting I/O failed 00:29:25.807 Write completed with error (sct=0, sc=8) 00:29:25.807 starting I/O failed 00:29:25.807 Write completed with error (sct=0, sc=8) 00:29:25.807 starting I/O failed 00:29:25.807 Read completed with error (sct=0, sc=8) 00:29:25.807 starting I/O failed 00:29:25.807 Read completed with error (sct=0, sc=8) 00:29:25.807 starting I/O failed 00:29:25.807 [2024-07-12 11:07:42.657982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.807 [2024-07-12 11:07:42.658708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.658770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were 
unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.659384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.659445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.659825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.659838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.660384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.660443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.660845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.660859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.661381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.661441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.661751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.661764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.662139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.662151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.662567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.662580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.663005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.663022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.663473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.663498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 
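errno = 111 is ECONNREFUSED: target_disconnect.sh killed the target (kill -9 2284555 above) while the reconnect example was mid-I/O, so the CQ transport error is followed by a stream of refused reconnect attempts to 10.0.0.2:4420, which keep failing until a target is listening again. A quick out-of-band way to watch for the listener coming back (a hypothetical check, not part of the test; assumes a netcat that supports -z):

    until nc -z 10.0.0.2 4420; do    # -z: connect scan only, no data
        sleep 1                      # still refused; retry like the reconnect app does
    done
    echo "port 4420 accepting connections again"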
00:29:25.807 [2024-07-12 11:07:42.663888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.663899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.664231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.664243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.664638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.664649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.665000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.665011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.665377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.665388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.665801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.665812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.666225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.666237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.666627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.666638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.667005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.667016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.667448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.667460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 
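For reference, the target-side configuration these reconnect attempts depend on (the rpc_cmd calls made before the kill) maps one-to-one onto rpc.py invocations. A sketch against the default RPC socket, with flags taken verbatim from the log:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512 B blocks
    scripts/rpc.py nvmf_create_transport -t tcp -o             # '-o' carried in NVMF_TRANSPORT_OPTS above
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420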
00:29:25.807 [2024-07-12 11:07:42.667883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.667895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.668335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.668346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.668779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.668790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.669233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.669244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.669570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.669581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.669954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.669965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.670337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.670350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.670740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.670750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.807 [2024-07-12 11:07:42.671045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.807 [2024-07-12 11:07:42.671056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.807 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.671488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.671499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 
00:29:25.808 [2024-07-12 11:07:42.671896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.671907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.672227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.672239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.672652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.672663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.673016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.673027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.673287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.673299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.673707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.673719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.674046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.674057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.674448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.674460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.674818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.674830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.675205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.675215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 
00:29:25.808 [2024-07-12 11:07:42.675595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.675605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.675863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.675878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.676168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.676179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.676592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.676603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.676975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.676985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.677338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.677348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.677704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.677714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.677922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.677933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.678344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.678363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 00:29:25.808 [2024-07-12 11:07:42.678737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.808 [2024-07-12 11:07:42.678747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.808 qpair failed and we were unable to recover it. 
00:29:25.808 [2024-07-12 11:07:42.679048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.808 [2024-07-12 11:07:42.679058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:25.808 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED) -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5b90000b90 -> "qpair failed and we were unable to recover it.") repeats for 208 further connection attempts to 10.0.0.2, port=4420 between 11:07:42.679316 and 11:07:42.766518 ...]
00:29:25.814 [2024-07-12 11:07:42.766842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.814 [2024-07-12 11:07:42.766871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:25.814 qpair failed and we were unable to recover it.
00:29:25.814 [2024-07-12 11:07:42.767278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.767307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.767744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.767772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.768200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.768229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.768634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.768662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.769087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.769115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.769480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.769508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.769945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.769973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.770451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.770480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.770899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.770927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.771232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.771265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 
00:29:25.814 [2024-07-12 11:07:42.771713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.771743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.772179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.772207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.772643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.772671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.773103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.773139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.773555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.773584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.773991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.774026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.774315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.774345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.774764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.774792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.775220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.775249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.775675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.775703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 
00:29:25.814 [2024-07-12 11:07:42.776135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.776164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.776632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.776660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.777054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.777082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.814 qpair failed and we were unable to recover it. 00:29:25.814 [2024-07-12 11:07:42.777564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.814 [2024-07-12 11:07:42.777593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.778021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.778049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.778459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.778488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.778909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.778939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.779412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.779442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.779870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.779898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.780200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.780230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 
00:29:25.815 [2024-07-12 11:07:42.780655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.780684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.781101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.781138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.781563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.781591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.781986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.782015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.782424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.782453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.782853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.782882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.783309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.783339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.783703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.783731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.784235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.784264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 00:29:25.815 [2024-07-12 11:07:42.784636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.815 [2024-07-12 11:07:42.784664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:25.815 qpair failed and we were unable to recover it. 
00:29:26.086 [2024-07-12 11:07:42.785079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.785110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.785577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.785605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.785913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.785941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.786452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.786481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.786898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.786927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.787351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.787381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.787800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.787828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.788323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.788352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.788780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.788808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.789228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.789257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 
00:29:26.086 [2024-07-12 11:07:42.789665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.789692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.790113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.790148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.790559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.790588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.791009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.791038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.791484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.791514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.791926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.791960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.792370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.086 [2024-07-12 11:07:42.792399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.086 qpair failed and we were unable to recover it. 00:29:26.086 [2024-07-12 11:07:42.792828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.792856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.793281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.793309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.793677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.793705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 
00:29:26.087 [2024-07-12 11:07:42.794108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.794143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.794556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.794583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.795007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.795035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.795458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.795487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.795906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.795934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.796253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.796283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.796716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.796743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.797221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.797249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.797568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.797597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.798028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.798056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 
00:29:26.087 [2024-07-12 11:07:42.798501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.798529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.798998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.799027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.799436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.799465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.799894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.799922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.800350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.800379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.800810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.800838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.801263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.801293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.801741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.801769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.802196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.802224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.802662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.802692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 
00:29:26.087 [2024-07-12 11:07:42.803112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.803150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.803552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.803581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.804008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.804037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.804445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.804474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.804904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.804931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.805222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.805252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.805707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.805735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.806146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.806175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.806609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.806637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.807074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.807102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 
00:29:26.087 [2024-07-12 11:07:42.807521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.807550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.807978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.808007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.808446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.808476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.808899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.808927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.809366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.809400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.809809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.809844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.810281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.810310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.810747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.810775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.811292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.811322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.811606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.811636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 
00:29:26.087 [2024-07-12 11:07:42.812069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.812097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.812521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.812550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.812957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.812985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.813416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.813445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.813866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.813895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.814324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.814352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.814777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.814804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.815178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.815208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.815631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.815659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.816088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.816117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 
00:29:26.087 [2024-07-12 11:07:42.816558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.816587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.817017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.817045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.817531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.817560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.817959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.817987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.818333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.818361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.818794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.818823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.819246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.087 [2024-07-12 11:07:42.819274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.087 qpair failed and we were unable to recover it. 00:29:26.087 [2024-07-12 11:07:42.819677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.819705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.820143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.820171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.820607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.820636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 
00:29:26.088 [2024-07-12 11:07:42.821059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.821087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.821514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.821543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.821971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.821999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.822447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.822477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.822901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.822930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.823334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.823362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.823814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.823843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.824266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.824296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.824740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.824769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.825196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.825225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 
00:29:26.088 [2024-07-12 11:07:42.825529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.825558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.825987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.826015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.826494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.826523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.826919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.826948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.827350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.827379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.827806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.827840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.828251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.828280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.828786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.828815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.829213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.829242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.829678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.829707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 
00:29:26.088 [2024-07-12 11:07:42.830133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.830163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.830580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.830609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.831034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.831062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.831540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.831569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.831993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.832022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.832431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.832459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.832905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.832933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.833374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.833402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.833828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.833856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.834284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.834313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 
00:29:26.088 [2024-07-12 11:07:42.834722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.834751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.835169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.835200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.835689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.835719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.836159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.836189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.836597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.836625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.837039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.837067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.837483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.837513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.837943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.837971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.838372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.838401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.838837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.838865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 
00:29:26.088 [2024-07-12 11:07:42.839263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.839293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.839717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.839745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.840170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.840200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.840643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.840672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.841106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.841142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.841550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.841578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.841993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.842021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.842435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.842464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.842897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.842925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.843361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.843390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 
00:29:26.088 [2024-07-12 11:07:42.843826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.843855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.844289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.844318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.844749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.844777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.845207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.845236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.845650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.845678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.846165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.846200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.846603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.088 [2024-07-12 11:07:42.846631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.088 qpair failed and we were unable to recover it. 00:29:26.088 [2024-07-12 11:07:42.846917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.846948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.847356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.847385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.847816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.847844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 
00:29:26.089 [2024-07-12 11:07:42.848272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.848302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.848607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.848635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.849058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.849087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.849503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.849532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.849959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.849987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.850412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.850440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.850915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.850943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.851379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.851408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.851837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.851865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.852179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.852211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 
00:29:26.089 [2024-07-12 11:07:42.852620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.852650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.853085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.853113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.853539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.853568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.853985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.854014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.854446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.854475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.854905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.854933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.855340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.855369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.855790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.855818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.856241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.856270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.856789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.856817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 
00:29:26.089 [2024-07-12 11:07:42.857248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.857277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.857691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.857719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.858144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.858174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.858587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.858615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.859043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.859071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.859382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.859410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.859703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.859733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.860168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.860198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.860621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.860649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.861072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.861100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 
00:29:26.089 [2024-07-12 11:07:42.861611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.861640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.862027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.862054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.862533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.862562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.862984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.863012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.863298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.863329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.863732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.863766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.864169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.864199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.864728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.864755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.865202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.865231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.865662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.865690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 
00:29:26.089 [2024-07-12 11:07:42.866130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.866159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.866567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.866595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.867018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.867047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.867467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.867496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.867924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.867952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.868369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.868398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.868841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.868869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.869284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.869313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.089 qpair failed and we were unable to recover it. 00:29:26.089 [2024-07-12 11:07:42.869726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.089 [2024-07-12 11:07:42.869756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.870182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.870212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 
00:29:26.090 [2024-07-12 11:07:42.870634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.870662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.871058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.871086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.871514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.871543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.871973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.872002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.872443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.872473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.872899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.872926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.873356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.873385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.873806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.873834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.874259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.874287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.874707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.874735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 
00:29:26.090 [2024-07-12 11:07:42.875108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.875144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.875556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.875585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.876007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.876036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.876348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.876377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.876820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.876848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.877319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.877348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.877832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.877860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.878288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.878316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.878747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.878775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.879200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.879229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 
00:29:26.090 [2024-07-12 11:07:42.879653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.879681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.880108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.880163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.880554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.880582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.881054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.881082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.881566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.881595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.882021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.882055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.882463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.882492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.882886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.882913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.883359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.883388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.883817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.883845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 
00:29:26.090 [2024-07-12 11:07:42.884274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.884302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.884741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.884769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.885174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.885203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.885611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.885639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.886130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.886159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.886596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.886624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.887055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.887083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.887507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.887535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.887867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.887894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.888320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.888350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 
00:29:26.090 [2024-07-12 11:07:42.888757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.888785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.889204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.889233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.889667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.889696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.890189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.890217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.890642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.890671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.891112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.891147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.891557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.891585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.891906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.891933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.892354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.892383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.892811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.892839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 
00:29:26.090 [2024-07-12 11:07:42.893281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.893310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.893743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.893771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.894198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.894227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.894658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.894686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.895114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.895151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.895453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.895483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.895905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.895932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.896334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.896362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.896788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.090 [2024-07-12 11:07:42.896815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.090 qpair failed and we were unable to recover it. 00:29:26.090 [2024-07-12 11:07:42.897245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.897273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 
00:29:26.091 [2024-07-12 11:07:42.897636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.897665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.898098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.898134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.898558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.898586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.899009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.899037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.899456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.899485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.899782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.899824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.900252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.900281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.900691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.900718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.901149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.901177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.901555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.901582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 
00:29:26.091 [2024-07-12 11:07:42.901992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.902019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.902417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.902446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.902887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.902915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.903336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.903364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.903790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.903817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.904244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.904273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.904637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.904665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.905089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.905118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.905562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.905591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.906006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.906034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 
00:29:26.091 [2024-07-12 11:07:42.906437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.906466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.906899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.906927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.907355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.907383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.907790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.907817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.908244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.908272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.908569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.908599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.909081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.909110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.909395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.909424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.909839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.909867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 00:29:26.091 [2024-07-12 11:07:42.910270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.091 [2024-07-12 11:07:42.910298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.091 qpair failed and we were unable to recover it. 
00:29:26.091 [2024-07-12 11:07:42.910725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.910752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.911118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.911154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.911590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.911619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.912035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.912063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.912479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.912508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.912950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.912977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.913456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.913485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.913900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.913927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.914349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.914378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.914690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.914718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.915158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.915186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.915619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.915646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.916048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.916076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.916572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.916601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.916889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.916918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.917419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.917457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.917753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.917781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.918085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.918113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.918432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.918461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.918896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.918924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.919360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.919388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.919831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.919860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.920264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.920294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.920715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.920743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.921052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.921080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.921448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.921477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.921917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.921944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.922373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.922402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.922802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.922829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.923273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.091 [2024-07-12 11:07:42.923303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.091 qpair failed and we were unable to recover it.
00:29:26.091 [2024-07-12 11:07:42.923676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.923705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.924103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.924137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.924499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.924527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.924939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.924967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.925281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.925309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.925724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.925752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.926169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.926198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.926632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.926660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.927023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.927051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.927488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.927517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.927956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.927983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.928436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.928465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.928906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.928935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.929350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.929379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.929807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.929835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.930118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.930157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.930568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.930596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.931026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.931054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.931468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.931498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.931968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.931996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.932477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.932505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.932932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.932961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.933264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.933293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.933642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.933671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.934090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.934117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.934571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.934605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.934898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.934928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.092 qpair failed and we were unable to recover it.
00:29:26.092 [2024-07-12 11:07:42.935386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.092 [2024-07-12 11:07:42.935415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.935842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.935870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.936299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.936327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.936751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.936779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.937181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.937210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.937417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.937446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.937755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.937783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.938094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.938129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.938470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.938497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.938924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.938953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.939170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.939198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.939552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.939580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.940007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.940036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.940460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.940489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.940912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.940940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.941242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.941271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.941699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.941727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.942154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.942183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.942610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.942638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.943070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.943098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.943447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.943476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.943932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.943960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.944393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.944422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.944831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.944859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.945159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.945189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.945679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.945708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.946137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.946166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.946504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.946535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.947044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.947072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.947359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.947389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.947875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.947904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.948074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.948101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.948533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.948562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.948987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.949015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.949468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.949497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.949933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.949961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.950361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.950390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.950802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.950830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.951102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.951144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.951558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.951586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.951895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.951924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.952354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.952382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.952682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.952719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.952969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.952997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.953416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.953446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.953869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.953897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.954322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.954351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.954822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.954850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.955190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.955219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.955549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.955580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.956009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.956038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.956515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.956543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.956986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.957014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.957425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.957454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.957895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.957925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.093 [2024-07-12 11:07:42.958233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.093 [2024-07-12 11:07:42.958264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.093 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.958682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.958711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.959144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.959174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.959495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.959523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.959830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.959868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.960260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.960289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.960714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.960742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.961174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.961203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.961634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.961662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.962094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.962129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.962440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.962469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.962774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.962805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.963198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.963227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.963660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.963688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.964187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.964215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.964626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.964653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.965078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.965106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.965575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.965604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.966015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.966044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.966462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.966491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.966774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.966803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.967237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.967267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.967690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.967720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.968111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.968161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.968616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.968645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.969000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.969028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.969311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.969340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.969677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.969706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.970136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.970166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.970609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.970637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.971035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.971063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.971371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.971400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.971894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.971923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.972373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.972402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.972823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.972851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.973037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.973067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.973493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.973524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.973971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.974001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.974498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.974529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.974940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.974969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.975397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.975427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.975856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.975885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.976308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.976337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.976763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.976791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.977222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.977253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.977682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.977711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.978143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.978172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.978496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.978524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.978850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.978878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.979262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.979292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.979709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.979738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.980165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.980194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.980649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.980678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.981106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.981163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.981592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.981620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.982006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.982036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.982444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.982472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.982776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.982806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.983253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.983282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.983696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.983725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.984159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.984189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.094 [2024-07-12 11:07:42.984616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.094 [2024-07-12 11:07:42.984644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.094 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.985091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.985120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.985415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.985454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.985865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.985894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.986327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.986356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.986778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.986806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.987249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.987279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.987561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.987591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.987924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.987953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.988356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.988385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.988753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.988780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.989178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.989207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.989626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.989656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.990054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.990083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.990539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.990568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.990988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.991017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.991431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.991461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.991888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.991916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.992351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.992380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.992810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.992838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.993264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.993293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.993754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.993786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.994197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.994226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.994652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.994681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.995101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.995139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.995531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.995559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.995990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.996018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.996304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.996335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.996790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.996818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.997216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.997251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.997662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.997691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.998140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.998169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.998581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.998610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.999027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.999056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.999483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.999513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:42.999940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:42.999968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.000375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.000406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.000742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.000771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.001186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.001217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.001609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.001639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.002066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.002094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.002407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.002437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.002845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.002873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.003289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.003318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.003722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.003751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.004170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.004201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.004623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.004652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.005044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.005073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.005509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.005540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.006005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.006033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.006475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.006503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.006938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.006966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.007388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.007416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.007730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.007759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.008170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.095 [2024-07-12 11:07:43.008200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.095 qpair failed and we were unable to recover it.
00:29:26.095 [2024-07-12 11:07:43.008633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.008661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.009104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.009143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.009462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.009490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.009947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.009977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.010402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.010432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.010855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.010883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.011196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.011225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.011716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.011744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.012171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.012200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.012627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.012656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.013089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.013119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.013552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.013580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.014006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.014034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.014478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.014508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.014927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.014963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.015379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.015409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.015835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.015864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.016292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.016321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.016746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.016775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.017195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.017224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.017532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.017562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.017985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.018016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.018410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.018440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.018863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.018892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.019187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.019216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.019659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.019690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.019968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.019997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.020417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.020448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.020905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.020933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.021341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.021370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.021776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.021805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.022227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.022258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.022701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.022730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.023228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.023258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.023738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.023766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.024194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.024223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.024624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.024652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.025114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.025151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.025586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.025615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.026044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.026072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.026501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.026530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.026959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.026988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.027418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.027447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.027660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.027691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.028099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.028146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.028600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.028628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.029057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.029086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.029507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.029536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.029963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.029992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.030427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.030456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.030818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.030848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.031259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.031289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.031722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.031750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.032170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.032201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.032621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.032657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.033083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.033111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.033541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.033570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.033988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.034016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.034418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.034447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.034858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.034887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.096 [2024-07-12 11:07:43.035309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.096 [2024-07-12 11:07:43.035339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.096 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.035759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.035789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.036191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.036219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.036656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.036684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.037106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.037146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.037590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.037618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.038045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.038073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.038508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.038537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.038975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.039004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.039421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.039450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.039870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.039898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.040321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.040351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.040781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.040810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.041247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.041276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.041728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.041756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.042152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.042183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.042640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.042669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.043012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.043041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.043489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.043518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.043934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.043962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.044387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.044416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.044773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.044801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.045224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.045254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.045688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.045716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.046132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.046161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.046500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.046529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.046910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.046938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.047229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.047260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.047668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.047697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.048056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.048084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.048548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.048577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.049005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.049033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.049434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.049463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.049880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.049908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.050380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.050415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.050824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.050853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.051270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.051300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.051717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.051746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.052193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.052222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.052650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.052678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.053104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.053139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.053539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.053566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.053996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.054025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.054314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.054345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.054774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.054801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.055226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.055255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.055688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.055717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.056185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.056214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.056705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.056734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.057172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.057202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.057659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.057687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.058113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.097 [2024-07-12 11:07:43.058150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.097 qpair failed and we were unable to recover it.
00:29:26.097 [2024-07-12 11:07:43.058559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.097 [2024-07-12 11:07:43.058587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.097 qpair failed and we were unable to recover it. 00:29:26.097 [2024-07-12 11:07:43.059020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.097 [2024-07-12 11:07:43.059048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.097 qpair failed and we were unable to recover it. 00:29:26.097 [2024-07-12 11:07:43.059507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.097 [2024-07-12 11:07:43.059536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.097 qpair failed and we were unable to recover it. 00:29:26.097 [2024-07-12 11:07:43.059970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.097 [2024-07-12 11:07:43.059997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.097 qpair failed and we were unable to recover it. 00:29:26.097 [2024-07-12 11:07:43.060459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.097 [2024-07-12 11:07:43.060488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.060888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.060921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.061317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.061346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.061775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.061803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.062202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.062231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.062529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.062560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 
00:29:26.368 [2024-07-12 11:07:43.062975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.063006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.063416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.063446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.063870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.063898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.064286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.064315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.064753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.064781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.065144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.065173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.065590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.065620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.066049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.066077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.066514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.066543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.066978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.067007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 
00:29:26.368 [2024-07-12 11:07:43.067418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.067449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.067747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.067774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.068226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.068263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.068683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.068712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.069159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.069188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.069488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.069519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.069935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.069964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.070331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.070360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.070782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.070811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.071231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.071260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 
00:29:26.368 [2024-07-12 11:07:43.071576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.071603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.072031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.072059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.072500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.072528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.072955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.072983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.073396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.073425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.073839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.073867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.074283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.074313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.074754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.074782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.075182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.075211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.075640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.075668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 
00:29:26.368 [2024-07-12 11:07:43.076097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.076135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.076560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.076588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.076902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.076930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.077389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.077418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.077839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.077867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.078302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.078331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.078830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.078859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.079277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.079308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.079715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.079746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.368 [2024-07-12 11:07:43.080171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.080201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 
00:29:26.368 [2024-07-12 11:07:43.080631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.368 [2024-07-12 11:07:43.080660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.368 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.081055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.081083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.081509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.081538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.081963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.081991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.082437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.082467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.082874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.082906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.083336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.083365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.083779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.083808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.084237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.084266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.084690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.084718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 
00:29:26.369 [2024-07-12 11:07:43.085149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.085178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.085471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.085501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.085911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.085946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.086356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.086386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.086805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.086835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.087235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.087267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.087689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.087720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.088148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.088177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.088638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.088666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.088972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.089004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 
00:29:26.369 [2024-07-12 11:07:43.089423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.089452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.089891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.089919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.090393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.090422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.090800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.090828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.091289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.091318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.091719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.091747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.092160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.092190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.092657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.092685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.093172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.093201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.093615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.093643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 
00:29:26.369 [2024-07-12 11:07:43.094076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.094104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.094558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.094587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.094992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.095021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.095449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.095480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.095892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.095921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.096219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.096250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.096655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.096684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.097129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.097158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.097587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.097615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.098031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.098061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 
00:29:26.369 [2024-07-12 11:07:43.098404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.098433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.098879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.098908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.099353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.099384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.099817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.099844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.100188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.100240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.100680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.100709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.101136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.101165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.101607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.101635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.102061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.102089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.102598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.102628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 
00:29:26.369 [2024-07-12 11:07:43.103159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.103190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.103609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.103637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.104041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.104077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.104491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.104520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.104944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.104971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.105388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.105417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.105880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.105909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.106470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.106572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.107084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.107118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.107572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.107603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 
00:29:26.369 [2024-07-12 11:07:43.108076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.108105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.108555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.108584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.109031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.109060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.109375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.109413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.109852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.109881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.110297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.110327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.110756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.110785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.111216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.111247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.111676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.111704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.112101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.112139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 
00:29:26.369 [2024-07-12 11:07:43.112539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.112567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.112974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.113002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.113322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.113351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.113701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.113729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.114148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.114177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.114610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.114638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.115053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.369 [2024-07-12 11:07:43.115081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.369 qpair failed and we were unable to recover it. 00:29:26.369 [2024-07-12 11:07:43.115562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.115592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.116019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.116048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.116369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.116400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 
00:29:26.370 [2024-07-12 11:07:43.116822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.116850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.117279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.117310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.117707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.117735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.118147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.118177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.118598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.118626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.119054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.119081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.119548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.119577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.120007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.120035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.120512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.120542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.120989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.121017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 
00:29:26.370 [2024-07-12 11:07:43.121468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.121498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.121924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.121954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.122382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.122417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.122832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.122861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.123279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.123309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.123739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.123767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.124059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.124090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.124581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.124611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.125056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.125084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.125518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.125549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 
00:29:26.370 [2024-07-12 11:07:43.125972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.126000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.126430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.126460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.126874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.126903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.127365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.127395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.127706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.127734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.128016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.128047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.128535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.128565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.128994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.129022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.129422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.129451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.129879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.129907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 
00:29:26.370 [2024-07-12 11:07:43.130338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.130368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.130791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.130819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.131259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.131288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.131616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.131645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.132066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.132096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.132419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.132450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.135022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.135092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.135588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.135625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.136080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.136111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.136434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.136468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 
00:29:26.370 [2024-07-12 11:07:43.136908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.136938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.137374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.137408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.137888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.137918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.138344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.138375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.138855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.138883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.139317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.139350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.139762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.139791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.140645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.140703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.141068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.141103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.141556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.141586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 
00:29:26.370 [2024-07-12 11:07:43.142022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.142051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.142448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.142481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.142847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.142884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.145282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.145351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.145845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.145880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.146324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.146356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.146768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.146797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.147239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.147269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.147659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.147686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.148092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.148133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 
00:29:26.370 [2024-07-12 11:07:43.148571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.148601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.148907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.148940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.150060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.150104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.150595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.370 [2024-07-12 11:07:43.150627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.370 qpair failed and we were unable to recover it. 00:29:26.370 [2024-07-12 11:07:43.150930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.150962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.151373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.151404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.151833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.151862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.152279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.152312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.152836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.152864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.153355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.153385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 
00:29:26.371 [2024-07-12 11:07:43.153786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.153814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.154228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.154259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.154513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.154542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.155035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.155064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.155474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.155505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.155942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.155971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.156459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.156489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.156913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.156940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.157347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.157378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.157821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.157851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 
00:29:26.371 [2024-07-12 11:07:43.158274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.158306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.158735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.158762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.159186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.159216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.159582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.159615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.160028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.160057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.160395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.160426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.160750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.160779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.161214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.161245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.161775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.161805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.162207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.162237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 
00:29:26.371 [2024-07-12 11:07:43.162625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.162654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.162995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.163023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.163471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.163508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.163924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.163953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.164292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.164322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.164749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.164778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.165202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.165233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.165564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.165592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.166036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.166064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.166364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.166398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 
00:29:26.371 [2024-07-12 11:07:43.166857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.166887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.167314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.167344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.167805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.167833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.168269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.168298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.168712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.168741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.169241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.169271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.169720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.169749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.170176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.170205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.170620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.170649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.171097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.171138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 
00:29:26.371 [2024-07-12 11:07:43.171532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.171561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.171860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.171890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.172328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.172358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.172676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.172705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.173016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.173044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.173435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.173465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.173874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.173904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.174326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.174357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.174652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.174680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.175098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.175138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 
00:29:26.371 [2024-07-12 11:07:43.175564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.175594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.176007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.176037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.176441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.176471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.176724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.176754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.177176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.177205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.177631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.177660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.177973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.178002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.178418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.178448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.178877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.178905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.179328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.179358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 
00:29:26.371 [2024-07-12 11:07:43.179745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.179776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.180079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.180108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.180576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.180611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.181037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.181065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.181370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.181401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.181828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.181857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.182280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.182309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.182636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.182667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.183082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.183110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.183579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.183607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 
00:29:26.371 [2024-07-12 11:07:43.183879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.183907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.371 qpair failed and we were unable to recover it. 00:29:26.371 [2024-07-12 11:07:43.184209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.371 [2024-07-12 11:07:43.184238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.184531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.184562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.185028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.185058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.185505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.185535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.185945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.185973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.186422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.186452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.186943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.186973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.187286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.187316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.187742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.187771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 
00:29:26.372 [2024-07-12 11:07:43.188197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.188226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.188493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.188520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.188953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.188983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.189408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.189439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.189856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.189884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.190306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.190335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.190644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.190673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.190970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.191004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.191433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.191463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.191890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.191920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 
00:29:26.372 [2024-07-12 11:07:43.192307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.192337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.192771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.192800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.193194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.193224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.193642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.193670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.194156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.194187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.194596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.194625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.195038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.195067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.195351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.195381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.195682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.195713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.196179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.196209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 
00:29:26.372 [2024-07-12 11:07:43.196650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.196679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.196979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.197010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.197439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.197476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.197897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.197926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.198344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.198375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.198794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.198822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.199320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.199349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.199776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.199805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.200113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.200154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.200602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.200631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 
00:29:26.372 [2024-07-12 11:07:43.201079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.201107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.201575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.201606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.202035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.202063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.202507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.202538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.202967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.202998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.203326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.203357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.203797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.203828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.204241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.204270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.204694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.204724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.204993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.205021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 
00:29:26.372 [2024-07-12 11:07:43.205448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.205477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.205899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.205929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.206347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.206378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.206820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.206847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.207284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.207313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.207747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.207777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.208192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.208221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.208647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.208676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.209100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.209141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 00:29:26.372 [2024-07-12 11:07:43.209581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.372 [2024-07-12 11:07:43.209610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.372 qpair failed and we were unable to recover it. 
00:29:26.375 [2024-07-12 11:07:43.297604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.297632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.298060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.298088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.298593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.298623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.299043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.299071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.299506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.299536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.299961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.299989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.300410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.300440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.300837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.300866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.301286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.301315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.301756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.301792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 
00:29:26.375 [2024-07-12 11:07:43.302212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.302242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.302552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.302581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.303008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.303038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.303452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.303482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.303903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.303932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.304372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.304402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.304727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.304759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.305203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.305235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.305643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.305671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.306096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.306135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 
00:29:26.375 [2024-07-12 11:07:43.306584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.306612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.307036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.307066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.307491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.307521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.307945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.307974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.308440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.308470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.308890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.308917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.309243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.309271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.309705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.309734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.310157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.310186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.310623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.310652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 
00:29:26.375 [2024-07-12 11:07:43.311065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.311092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.311554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.311583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.311996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.312027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.312417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.312447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.312869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.312897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.313326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.313355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.375 qpair failed and we were unable to recover it. 00:29:26.375 [2024-07-12 11:07:43.313756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.375 [2024-07-12 11:07:43.313785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.314246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.314276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.314763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.314792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.315105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.315145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 
00:29:26.376 [2024-07-12 11:07:43.315561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.315589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.316025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.316055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.316468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.316497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.316912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.316940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.317364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.317393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.317855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.317882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.318197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.318225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.318657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.318686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.319112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.319150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.319591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.319625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 
00:29:26.376 [2024-07-12 11:07:43.320034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.320061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.320547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.320577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.320989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.321019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.321448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.321478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.321898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.321931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.322361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.322390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.322814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.322844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.323155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.323184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.323623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.323651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.324045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.324074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 
00:29:26.376 [2024-07-12 11:07:43.324507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.324537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.324962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.324992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.325411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.325441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.325754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.325788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.326199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.326228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.326659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.326687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.327116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.327155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.327597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.327626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.328049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.328077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.328544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.328573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 
00:29:26.376 [2024-07-12 11:07:43.329054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.329084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.329542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.329573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.329999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.330030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.330448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.330477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.330837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.330866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.331282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.331311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.331626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.331655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.332095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.332134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.332510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.332540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.332934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.332962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 
00:29:26.376 [2024-07-12 11:07:43.333389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.333418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.333852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.333880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.334292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.334322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.334748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.334775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.335208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.335236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.335658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.335685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.336110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.336150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.336640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.336669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.337091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.337119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.337434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.337468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 
00:29:26.376 [2024-07-12 11:07:43.337869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.337897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.338323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.338352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.338773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.338801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.339225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.339255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.339686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.339715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.340145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.340174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.340495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.340528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.340942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.340970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.341368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.341398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.376 [2024-07-12 11:07:43.341709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.341737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 
00:29:26.376 [2024-07-12 11:07:43.342163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.376 [2024-07-12 11:07:43.342193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.376 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.342622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.342652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.343076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.343107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.343462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.343490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.345976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.346048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.346575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.346611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.346935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.346969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.347409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.347441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.347704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.347732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.348152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.348182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 
00:29:26.650 [2024-07-12 11:07:43.348610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.348639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.349109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.349148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.349548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.349576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.349891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.349921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.350355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.350383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.350763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.350793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.351199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.351231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.351649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.351677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.352115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.352153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.352447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.352474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 
00:29:26.650 [2024-07-12 11:07:43.352880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.352910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.353335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.353364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.353796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.353823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.354260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.354289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.354702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.354730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.355168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.355199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.355641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.355671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.356092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.356119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.356548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.356577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.357002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.357036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 
00:29:26.650 [2024-07-12 11:07:43.357445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.650 [2024-07-12 11:07:43.357473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.650 qpair failed and we were unable to recover it. 00:29:26.650 [2024-07-12 11:07:43.357890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.651 [2024-07-12 11:07:43.357919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.651 qpair failed and we were unable to recover it. 00:29:26.651 [2024-07-12 11:07:43.358336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.651 [2024-07-12 11:07:43.358365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.651 qpair failed and we were unable to recover it. 00:29:26.651 [2024-07-12 11:07:43.358782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.651 [2024-07-12 11:07:43.358810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.651 qpair failed and we were unable to recover it. 00:29:26.651 [2024-07-12 11:07:43.359227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.651 [2024-07-12 11:07:43.359258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.651 qpair failed and we were unable to recover it. 00:29:26.651 [2024-07-12 11:07:43.359681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.651 [2024-07-12 11:07:43.359709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.651 qpair failed and we were unable to recover it. 00:29:26.651 [2024-07-12 11:07:43.360145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.651 [2024-07-12 11:07:43.360175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.651 qpair failed and we were unable to recover it. 00:29:26.651 [2024-07-12 11:07:43.360624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.651 [2024-07-12 11:07:43.360652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.651 qpair failed and we were unable to recover it. 00:29:26.651 [2024-07-12 11:07:43.361082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.651 [2024-07-12 11:07:43.361109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.651 qpair failed and we were unable to recover it. 00:29:26.651 [2024-07-12 11:07:43.361623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.651 [2024-07-12 11:07:43.361651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.651 qpair failed and we were unable to recover it. 
00:29:26.651 [2024-07-12 11:07:43.362085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.651 [2024-07-12 11:07:43.362115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.651 qpair failed and we were unable to recover it.
00:29:26.651 [... the same error triple — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats for every subsequent reconnect attempt, with only the timestamps advancing from 11:07:43.362522 through 11:07:43.452521; no attempt recovers ...]
00:29:26.656 [2024-07-12 11:07:43.452836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.452865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it.
00:29:26.656 [2024-07-12 11:07:43.453329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.453359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.453823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.453851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.454052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.454079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.454561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.454589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.455010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.455038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.455540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.455570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.455884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.455921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.456402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.456435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.456844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.456872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.457351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.457381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 
00:29:26.656 [2024-07-12 11:07:43.457820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.457850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.458145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.458178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.656 [2024-07-12 11:07:43.458625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.656 [2024-07-12 11:07:43.458653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.656 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.459077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.459105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.459617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.459646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.460083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.460110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.460555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.460584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.461011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.461040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.461464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.461494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.461914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.461942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 
00:29:26.657 [2024-07-12 11:07:43.462470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.462574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.462957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.462998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.463331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.463362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.463786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.463814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.464229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.464258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.464690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.464719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.465153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.465183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.465613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.465641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.466147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.466177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.466605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.466634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 
00:29:26.657 [2024-07-12 11:07:43.467071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.467100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.467527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.467557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.467977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.468006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.468450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.468481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.468906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.468935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.469350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.469379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.469700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.469732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.470153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.470184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.470631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.470659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.471088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.471117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 
00:29:26.657 [2024-07-12 11:07:43.471623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.471652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.472051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.472080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.472506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.472535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.472884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.472913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.473347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.473376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.473737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.473765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.474193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.474230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.474637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.657 [2024-07-12 11:07:43.474665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.657 qpair failed and we were unable to recover it. 00:29:26.657 [2024-07-12 11:07:43.475079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.475107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.475515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.475544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 
00:29:26.658 [2024-07-12 11:07:43.475968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.475997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.476420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.476453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.476866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.476897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.477320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.477353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.477775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.477807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.478221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.478251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.478670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.478699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.479140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.479170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.479594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.479623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.480040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.480068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 
00:29:26.658 [2024-07-12 11:07:43.480541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.480571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.480994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.481024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.481425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.481454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.481772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.481801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.482222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.482252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.482679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.482707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.483141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.483170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.483596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.483624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.483943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.483972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.484417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.484446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 
00:29:26.658 [2024-07-12 11:07:43.484910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.484938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.485251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.485282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.485745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.485773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.486210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.486242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.486666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.486695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.487118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.487180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.487604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.487631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.487863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.487895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.488227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.488257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.488666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.488694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 
00:29:26.658 [2024-07-12 11:07:43.489190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.489221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.489524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.489554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.489973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.490003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.490416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.490446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.490748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.490779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.491214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.491242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.658 [2024-07-12 11:07:43.491686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.658 [2024-07-12 11:07:43.491721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.658 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.492137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.492166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.492568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.492598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.493066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.493094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 
00:29:26.659 [2024-07-12 11:07:43.493429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.493461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.493878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.493906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.494310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.494340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.494778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.494807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.495238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.495267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.495682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.495710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.496136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.496165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.496471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.496502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.496814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.496845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.497271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.497300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 
00:29:26.659 [2024-07-12 11:07:43.497753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.497781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.498197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.498227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.498665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.498694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.499017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.499047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.499492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.499520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.499948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.499976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.500420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.500448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.500880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.500907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.501336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.501366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.501678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.501707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 
00:29:26.659 [2024-07-12 11:07:43.502013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.502043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.502442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.502471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.502874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.502903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.503326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.503356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.503756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.503786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.504217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.504247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.504693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.504721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.505119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.505160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.505632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.659 [2024-07-12 11:07:43.505660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.659 qpair failed and we were unable to recover it. 00:29:26.659 [2024-07-12 11:07:43.505947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.505977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 
00:29:26.660 [2024-07-12 11:07:43.506417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.506447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 00:29:26.660 [2024-07-12 11:07:43.506872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.506901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 00:29:26.660 [2024-07-12 11:07:43.507330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.507359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 00:29:26.660 [2024-07-12 11:07:43.507790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.507818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 00:29:26.660 [2024-07-12 11:07:43.508141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.508172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 00:29:26.660 [2024-07-12 11:07:43.508597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.508625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 00:29:26.660 [2024-07-12 11:07:43.509053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.509088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 00:29:26.660 [2024-07-12 11:07:43.509573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.509604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 00:29:26.660 [2024-07-12 11:07:43.510064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.510092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 00:29:26.660 [2024-07-12 11:07:43.510540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.660 [2024-07-12 11:07:43.510570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.660 qpair failed and we were unable to recover it. 
00:29:26.660 [2024-07-12 11:07:43.510997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.660 [2024-07-12 11:07:43.511027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.660 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5b90000b90 -> qpair failed and we were unable to recover it) repeats for every reconnect attempt between 11:07:43.511 and 11:07:43.603; all attempts target 10.0.0.2, port 4420, all fail with errno = 111, and none of the qpairs are recovered ...]
00:29:26.665 [2024-07-12 11:07:43.603209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.665 [2024-07-12 11:07:43.603238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.665 qpair failed and we were unable to recover it.
00:29:26.665 [2024-07-12 11:07:43.603682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.665 [2024-07-12 11:07:43.603710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.665 qpair failed and we were unable to recover it. 00:29:26.665 [2024-07-12 11:07:43.604073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.665 [2024-07-12 11:07:43.604102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.665 qpair failed and we were unable to recover it. 00:29:26.665 [2024-07-12 11:07:43.604446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.665 [2024-07-12 11:07:43.604478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.665 qpair failed and we were unable to recover it. 00:29:26.665 [2024-07-12 11:07:43.604910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.665 [2024-07-12 11:07:43.604939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.665 qpair failed and we were unable to recover it. 00:29:26.665 [2024-07-12 11:07:43.605358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.665 [2024-07-12 11:07:43.605389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.665 qpair failed and we were unable to recover it. 00:29:26.665 [2024-07-12 11:07:43.605697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.665 [2024-07-12 11:07:43.605727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.665 qpair failed and we were unable to recover it. 00:29:26.665 [2024-07-12 11:07:43.606163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.665 [2024-07-12 11:07:43.606194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.665 qpair failed and we were unable to recover it. 00:29:26.665 [2024-07-12 11:07:43.606620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.606647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.607056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.607085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.607511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.607540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 
00:29:26.666 [2024-07-12 11:07:43.607968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.607996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.608428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.608458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.608883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.608911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.609329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.609361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.609783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.609812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.610235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.610265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.610716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.610746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.611180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.611209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.611661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.611689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.612160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.612189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 
00:29:26.666 [2024-07-12 11:07:43.612483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.612510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.612817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.612849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.613328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.613359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.613841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.613869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.614294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.614323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.614746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.614775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.615247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.615283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.615676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.615705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.616164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.616194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.616591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.616619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 
00:29:26.666 [2024-07-12 11:07:43.616920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.616947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.617437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.617467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.617895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.617923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.618330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.618358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.618797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.618825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.619144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.619174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.619479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.619507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.619935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.619962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.620363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.620392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 00:29:26.666 [2024-07-12 11:07:43.620814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.666 [2024-07-12 11:07:43.620842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.666 qpair failed and we were unable to recover it. 
00:29:26.974 [2024-07-12 11:07:43.621259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.621292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.621730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.621765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.622230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.622270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.622740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.622781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.623218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.623257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.623625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.623662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.623954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.624011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.624461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.624492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.624902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.624929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.625348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.625379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 
00:29:26.974 [2024-07-12 11:07:43.625804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.625831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.626264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.626294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.626720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.626749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.627161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.627191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.627610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.627639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.628080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.628108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.628558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.974 [2024-07-12 11:07:43.628595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.974 qpair failed and we were unable to recover it. 00:29:26.974 [2024-07-12 11:07:43.628993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.629021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.629479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.629509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.629937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.629965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 
00:29:26.975 [2024-07-12 11:07:43.630409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.630437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.630866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.630893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.631298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.631326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.631828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.631857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.632157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.632186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.632623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.632651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.633055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.633089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.633553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.633582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.633892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.633921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.634257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.634290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 
00:29:26.975 [2024-07-12 11:07:43.634691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.634719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.635134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.635163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.635598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.635626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.636049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.636076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.636536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.636564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.636875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.636901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.637267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.637296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.637772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.637799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.638251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.638280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.638700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.638727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 
00:29:26.975 [2024-07-12 11:07:43.639157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.639187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.639623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.639651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.640085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.640114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.640600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.640628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.641042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.641071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.641342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.641370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.641797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.641824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.642163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.642194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.642624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.642653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.643099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.643145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 
00:29:26.975 [2024-07-12 11:07:43.643583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.643610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.644030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.644058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.644363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.644392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.644818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.644847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.645264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.645293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.645720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.645747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.646165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.975 [2024-07-12 11:07:43.646193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.975 qpair failed and we were unable to recover it. 00:29:26.975 [2024-07-12 11:07:43.646621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.646651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.647044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.647072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.647556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.647585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 
00:29:26.976 [2024-07-12 11:07:43.647997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.648026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.648397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.648426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.648853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.648881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.649318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.649348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.649778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.649807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.650227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.650256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.650685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.650718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.651142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.651172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.651613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.651641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.652067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.652096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 
00:29:26.976 [2024-07-12 11:07:43.652573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.652603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.653026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.653055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.653506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.653536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.653930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.653957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.654477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.654578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.654987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.655026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.655471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.655503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.655903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.655932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.656248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.656278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.656685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.656714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 
00:29:26.976 [2024-07-12 11:07:43.657152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.657183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.657616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.657645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.658059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.658089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.658552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.658582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.658870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.658898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.659317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.659347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.659773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.659803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.660274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.660304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.660746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.660775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 00:29:26.976 [2024-07-12 11:07:43.661199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.976 [2024-07-12 11:07:43.661228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.976 qpair failed and we were unable to recover it. 
00:29:26.976 [2024-07-12 11:07:43.661536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.976 [2024-07-12 11:07:43.661568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.976 qpair failed and we were unable to recover it.
00:29:26.981 [... the same three-line failure (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back roughly 210 times, timestamps 2024-07-12 11:07:43.661 through 11:07:43.753 ...]
00:29:26.982 [2024-07-12 11:07:43.754097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.754139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.754579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.754607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.755023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.755054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.755476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.755506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.755950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.755979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.756401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.756429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.756840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.756867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.757298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.757328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.757751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.757779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.758205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.758234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 
00:29:26.982 [2024-07-12 11:07:43.758704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.758733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.759160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.759191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.759656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.759685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.760112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.760152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.760542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.760570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.761017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.761044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.761440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.761470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.761905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.761933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.762233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.762264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.762697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.762726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 
00:29:26.982 [2024-07-12 11:07:43.763153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.763183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.763493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.763524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.763947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.763974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.764476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.764506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.764858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.764887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.765312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.765341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.765774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.765802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.766225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.766254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.766680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.766708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.767010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.767037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 
00:29:26.982 [2024-07-12 11:07:43.767358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.982 [2024-07-12 11:07:43.767389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.982 qpair failed and we were unable to recover it. 00:29:26.982 [2024-07-12 11:07:43.767818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.767845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.768271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.768302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.768744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.768781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.769204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.769233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.769657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.769686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.770119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.770159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.770586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.770615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.771044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.771072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.771493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.771523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 
00:29:26.983 [2024-07-12 11:07:43.771944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.771973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.772398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.772427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.772852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.772880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.773310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.773339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.773766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.773794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.774221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.774250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.774642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.774671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.775094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.775134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.775411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.775441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.775860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.775889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 
00:29:26.983 [2024-07-12 11:07:43.776320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.776351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.776764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.776792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.777215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.777244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.777690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.777719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.778074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.778103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.778419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.778448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.778879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.778906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.779334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.779364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.779687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.779714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.780140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.780171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 
00:29:26.983 [2024-07-12 11:07:43.780642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.780672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.781078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.781106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.781519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.781548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.781962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.781989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.782415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.782445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.983 [2024-07-12 11:07:43.782759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.983 [2024-07-12 11:07:43.782789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.983 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.783226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.783255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.783676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.783704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.784135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.784164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.784620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.784647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 
00:29:26.984 [2024-07-12 11:07:43.785077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.785106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.785524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.785553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.785979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.786007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.786424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.786465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.786885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.786914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.787334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.787364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.787771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.787798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.788233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.788262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.788687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.788715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.789019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.789047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 
00:29:26.984 [2024-07-12 11:07:43.789491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.789520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.789823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.789856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.790298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.790327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.790616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.790643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.791041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.791068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.791485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.791514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.791945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.791973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.792389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.792419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.792849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.792878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.793297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.793327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 
00:29:26.984 [2024-07-12 11:07:43.793759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.793786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.794223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.794254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.794678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.794708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.795166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.795197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.795618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.795646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.796073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.796102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.796534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.796563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.797037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.797065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.797490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.797519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.797945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.797974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 
00:29:26.984 [2024-07-12 11:07:43.798408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.798440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.798860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.798889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.799331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.799360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.799784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.799812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.800240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.800269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.800679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.800707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.984 qpair failed and we were unable to recover it. 00:29:26.984 [2024-07-12 11:07:43.801145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.984 [2024-07-12 11:07:43.801175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.801615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.801643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.802066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.802094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.802528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.802558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 
00:29:26.985 [2024-07-12 11:07:43.803037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.803066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.803540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.803570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.804004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.804032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.804468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.804504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.804910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.804938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.805448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.805552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.806062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.806097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.806594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.806623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.807030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.807060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.807474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.807505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 
00:29:26.985 [2024-07-12 11:07:43.807897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.807926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.808348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.808378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.808820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.808847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.809273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.809303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.809640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.809667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.810095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.810135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.810555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.810583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.810996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.811025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.811428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.811457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 00:29:26.985 [2024-07-12 11:07:43.811885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.985 [2024-07-12 11:07:43.811914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.985 qpair failed and we were unable to recover it. 
00:29:26.985 [2024-07-12 11:07:43.812243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.985 [2024-07-12 11:07:43.812282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:26.985 qpair failed and we were unable to recover it.
00:29:26.985 [... the same three-record failure repeats ~210 times in total through 2024-07-12 11:07:43.906 (Jenkins time 00:29:26.990), differing only in sub-second timestamps: posix_sock_create connect() errno = 111, then nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:29:26.990 [2024-07-12 11:07:43.906641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.990 [2024-07-12 11:07:43.906670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.907093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.907131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.907549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.907578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.907998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.908027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.908441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.908471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.908895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.908923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.909269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.909299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.909613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.909641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.910068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.910096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.910320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.910355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 
00:29:26.991 [2024-07-12 11:07:43.910807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.910835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.911165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.911195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.911624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.911652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.912084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.912113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.912598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.912628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.913056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.913084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.913425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.913454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.913880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.913908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.914345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.914376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.914846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.914875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 
00:29:26.991 [2024-07-12 11:07:43.915079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.915110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.915450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.915479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.915902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.915930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.916361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.916391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.916807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.916837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.917162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.917191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.917596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.917624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.918148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.918179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.918604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.918633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.919063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.919092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 
00:29:26.991 [2024-07-12 11:07:43.919507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.919538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.919957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.919985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.920417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.920447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.920779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.920808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.921250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.991 [2024-07-12 11:07:43.921279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.991 qpair failed and we were unable to recover it. 00:29:26.991 [2024-07-12 11:07:43.921579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.921609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.922058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.922086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.922487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.922517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.922933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.922961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.923275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.923303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 
00:29:26.992 [2024-07-12 11:07:43.923700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.923736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.924186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.924215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.924552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.924579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.924888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.924916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.925318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.925347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.925761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.925789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.926217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.926247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.926618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.926646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.927080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.927108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.927586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.927615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 
00:29:26.992 [2024-07-12 11:07:43.927911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.927938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.928383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.928414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.928846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.928874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.929308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.929337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.929778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.929807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.930232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.930262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.930665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.930693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.931133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.931163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.931620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.931650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.932067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.932096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 
00:29:26.992 [2024-07-12 11:07:43.932580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.932610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.933035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.933063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.933477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.933507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.933930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.933959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.934492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.934595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.935154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.935192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.935713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.935742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.936395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.936498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.937017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.937052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.937457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.937488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 
00:29:26.992 [2024-07-12 11:07:43.937987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.938015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.938417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.938446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.938744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.938772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.992 qpair failed and we were unable to recover it. 00:29:26.992 [2024-07-12 11:07:43.939048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.992 [2024-07-12 11:07:43.939075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.939480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.939509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.939923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.939951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.940392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.940422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.940832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.940861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.941275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.941305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.941614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.941642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 
00:29:26.993 [2024-07-12 11:07:43.941911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.941950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.942368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.942398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.942833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.942863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.943265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.943295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.943734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.943762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.944052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.944080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:26.993 [2024-07-12 11:07:43.944332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.993 [2024-07-12 11:07:43.944361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:26.993 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.944687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.944721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.945159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.945190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.945609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.945637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 
00:29:27.266 [2024-07-12 11:07:43.945964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.945995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.946303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.946332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.946775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.946803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.947239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.947268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.947712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.947742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.948164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.948193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.948592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.948620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.949042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.949070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.949345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.949374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.949813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.949841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 
00:29:27.266 [2024-07-12 11:07:43.950271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.950302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.950747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.950776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.951220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.951249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.951676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.951704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.952140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.952170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.952524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.952552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.953052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.953080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.953317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.953347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.953648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.953677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.954110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.954152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 
00:29:27.266 [2024-07-12 11:07:43.954617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.954645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.266 [2024-07-12 11:07:43.954959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.266 [2024-07-12 11:07:43.954987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.266 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.955417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.955447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.955877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.955906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.956415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.956444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.956730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.956757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.957219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.957248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.957586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.957614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.957738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.957764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.958073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.958106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 
00:29:27.267 [2024-07-12 11:07:43.958623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.958659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.958932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.958959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.959247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.959277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.959692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.959720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.960149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.960179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.960604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.960632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.961060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.961088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.961559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.961587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.962017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.962045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.962510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.962539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 
00:29:27.267 [2024-07-12 11:07:43.962970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.962997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.963418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.963448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.963694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.963721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.964144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.964173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.964653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.964683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.965094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.965134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.965466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.965498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.965886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.965914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.966340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.966370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.966828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.966856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 
00:29:27.267 [2024-07-12 11:07:43.967266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.967295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.967714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.967743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.968172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.968201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.968696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.968723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.969156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.969185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.969621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.969649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.970083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.970111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.970543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.970572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.970999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.971026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.971335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.971363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 
00:29:27.267 [2024-07-12 11:07:43.971777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.971806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.267 [2024-07-12 11:07:43.972234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.267 [2024-07-12 11:07:43.972262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.267 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.972694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.972722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.973096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.973132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.973601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.973629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.974055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.974082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.974560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.974590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.975024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.975051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.975403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.975434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.975878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.975906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 
00:29:27.268 [2024-07-12 11:07:43.976341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.976376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.976818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.976846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.977263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.977292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.977679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.977708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.978025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.978052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.978487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.978515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.978826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.978854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.979278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.979308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.979752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.979781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.980213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.980243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 
00:29:27.268 [2024-07-12 11:07:43.980586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.980614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.981040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.981068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.981491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.981520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.981942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.981969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.982289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.982320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.982728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.982756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.983188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.983218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.983645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.983673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.984085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.984112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.984587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.984616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 
00:29:27.268 [2024-07-12 11:07:43.985039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.985068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.985470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.985500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.985931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.985959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.986379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.986408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.986835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.986863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.987289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.987319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.987738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.987766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.988261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.988290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.988716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.988744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.989191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.989220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 
00:29:27.268 [2024-07-12 11:07:43.989676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.989705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.990144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.268 [2024-07-12 11:07:43.990174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.268 qpair failed and we were unable to recover it. 00:29:27.268 [2024-07-12 11:07:43.990592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.990620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.991045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.991072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.991489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.991519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.991928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.991956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.992270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.992303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.992695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.992724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.993154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.993183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.993612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.993640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 
00:29:27.269 [2024-07-12 11:07:43.994069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.994104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.994521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.994550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.994970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.994998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.995432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.995462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.995892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.995920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.996276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.996305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.996726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.996755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.997160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.997189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.997588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.997616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.998046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.998075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 
00:29:27.269 [2024-07-12 11:07:43.998537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.998567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.999004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.999033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.999459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.999489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:43.999910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:43.999939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.000363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.000393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.000818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.000846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.001141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.001174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.001623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.001652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.002076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.002103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.002606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.002636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 
00:29:27.269 [2024-07-12 11:07:44.003042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.003070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.003487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.003516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.003815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.003852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.004137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.004169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.004503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.004530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.004952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.004980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.005439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.005469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.005869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.005897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.006214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.006246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.006681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.006709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 
00:29:27.269 [2024-07-12 11:07:44.007143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.007172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.269 [2024-07-12 11:07:44.007464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.269 [2024-07-12 11:07:44.007492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.269 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.007780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.007810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.008246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.008275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.008709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.008737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.009153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.009183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.009599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.009628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.010071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.010100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.010552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.010581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.010999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.011028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 
00:29:27.270 [2024-07-12 11:07:44.011437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.011473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.011875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.011902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.012404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.012433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.012826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.012854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.013271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.013301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.013710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.013739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.014158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.014187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.014608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.014637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.015048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.015076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.015525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.015553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 
00:29:27.270 [2024-07-12 11:07:44.015971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.015998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.016419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.016448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.016874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.016902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.017339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.017368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.017782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.017811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.018234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.018263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.018616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.018645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.018958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.018986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.019400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.019430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.019857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.019885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 
00:29:27.270 [2024-07-12 11:07:44.020193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.020226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.020692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.020720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.021142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.270 [2024-07-12 11:07:44.021173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.270 qpair failed and we were unable to recover it. 00:29:27.270 [2024-07-12 11:07:44.021498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.021529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.021944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.021971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.022396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.022426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.022716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.022747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.023107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.023147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.023602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.023630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.023945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.023973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 
00:29:27.271 [2024-07-12 11:07:44.024408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.024437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.024861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.024889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.025385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.025413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.025726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.025755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.026200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.026230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.026683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.026710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.027121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.027160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.027561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.027589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.028089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.028117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.028558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.028586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 
00:29:27.271 [2024-07-12 11:07:44.029013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.029048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.029445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.029474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.029906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.029935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.030353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.030383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.030750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.030778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.031192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.031221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.031653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.031681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.032107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.032146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.032610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.032638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.033060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.033087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 
00:29:27.271 [2024-07-12 11:07:44.033396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.033429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.033832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.033860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.034286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.034315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.034743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.034771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.035191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.035221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.035527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.035559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.035996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.036025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.036437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.036467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.036837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.036875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.037268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.037297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 
00:29:27.271 [2024-07-12 11:07:44.037707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.037735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.038054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.038085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.038508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.038538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.271 qpair failed and we were unable to recover it. 00:29:27.271 [2024-07-12 11:07:44.038981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.271 [2024-07-12 11:07:44.039009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.272 qpair failed and we were unable to recover it. 00:29:27.272 [2024-07-12 11:07:44.039458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.272 [2024-07-12 11:07:44.039488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.272 qpair failed and we were unable to recover it. 00:29:27.272 [2024-07-12 11:07:44.039766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.272 [2024-07-12 11:07:44.039794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.272 qpair failed and we were unable to recover it. 00:29:27.272 [2024-07-12 11:07:44.040207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.272 [2024-07-12 11:07:44.040236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.272 qpair failed and we were unable to recover it. 00:29:27.272 [2024-07-12 11:07:44.040674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.272 [2024-07-12 11:07:44.040702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.272 qpair failed and we were unable to recover it. 00:29:27.272 [2024-07-12 11:07:44.041141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.272 [2024-07-12 11:07:44.041171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.272 qpair failed and we were unable to recover it. 00:29:27.272 [2024-07-12 11:07:44.041604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.272 [2024-07-12 11:07:44.041632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.272 qpair failed and we were unable to recover it. 
00:29:27.277 [2024-07-12 11:07:44.126958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.126986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.127419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.127449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.127878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.127906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.128224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.128253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.128644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.128671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.129088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.129117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.129548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.129576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.130078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.130107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.130508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.130536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.130926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.130956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 
00:29:27.277 [2024-07-12 11:07:44.131349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.131379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.131806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.131833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.132151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.132179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.132606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.132634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.133029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.133057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.133455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.133484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.133911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.133939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.134308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.134337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.134748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.134775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.135205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.135235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 
00:29:27.277 [2024-07-12 11:07:44.135689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.135717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.136149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.136179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.136609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.136638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.137099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.137148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.137609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.137638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.138066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.138096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.138591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.138620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.138931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.138959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.139373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.139403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.139824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.139852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 
00:29:27.277 [2024-07-12 11:07:44.140215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.140243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.140680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.140708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.141071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.141099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.277 qpair failed and we were unable to recover it. 00:29:27.277 [2024-07-12 11:07:44.141534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.277 [2024-07-12 11:07:44.141562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.141897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.141930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.142361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.142390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.142824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.142852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.143282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.143312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.143745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.143775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.144181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.144210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 
00:29:27.278 [2024-07-12 11:07:44.144634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.144662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.145070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.145099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.145523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.145551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.145901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.145930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.146364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.146393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.146824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.146852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.147280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.147309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.147623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.147659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.148053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.148081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.148548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.148577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 
00:29:27.278 [2024-07-12 11:07:44.149042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.149069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.149544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.149574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.150003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.150030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.150470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.150500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.150922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.150951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.151373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.151403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.151745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.151774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.152058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.152088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.152329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.152358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.152818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.152846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 
00:29:27.278 [2024-07-12 11:07:44.153290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.153320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.153741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.153773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.154187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.154216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.154659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.154688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.155102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.155144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.155488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.155517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.155878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.155907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.156341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.156371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.156800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.156829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.157254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.157284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 
00:29:27.278 [2024-07-12 11:07:44.157695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.157723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.158142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.158172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.158637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.158664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.278 qpair failed and we were unable to recover it. 00:29:27.278 [2024-07-12 11:07:44.159089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.278 [2024-07-12 11:07:44.159117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.159559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.159589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.160003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.160032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.160438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.160469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.160889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.160918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.161356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.161389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.161802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.161831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 
00:29:27.279 [2024-07-12 11:07:44.162262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.162293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.162701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.162729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.163145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.163175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.163596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.163625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.164034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.164063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.164469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.164499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.164911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.164939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.165464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.165580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.166096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.166152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.166589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.166618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 
00:29:27.279 [2024-07-12 11:07:44.167047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.167075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.167566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.167596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.168058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.168089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.168457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.168497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.168982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.169010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.169406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.169434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.169848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.169877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.170302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.170331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.170750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.170780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.171186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.171215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 
00:29:27.279 [2024-07-12 11:07:44.171697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.171727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.171983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.172014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.172447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.172476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.172920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.172948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.173411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.173441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.173916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.279 [2024-07-12 11:07:44.173945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.279 qpair failed and we were unable to recover it. 00:29:27.279 [2024-07-12 11:07:44.174358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.174387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.174795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.174825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.175254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.175283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.175605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.175633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 
00:29:27.280 [2024-07-12 11:07:44.176050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.176078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.176511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.176540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.176889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.176918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.177354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.177384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.177657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.177685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.178144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.178173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.178614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.178644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.179071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.179099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.179423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.179451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.179886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.179914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 
00:29:27.280 [2024-07-12 11:07:44.180229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.180258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.180697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.180725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.181158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.181187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.181687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.181715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.182141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.182170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.182579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.182607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.183041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.183070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.183506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.183542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.183806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.183834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.184276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.184305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 
00:29:27.280 [2024-07-12 11:07:44.184587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.184613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.185027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.185055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.185545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.185574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.185882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.185909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.186214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.186242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.186677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.186706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.187133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.187162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.187459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.187486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.187916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.187944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 00:29:27.280 [2024-07-12 11:07:44.188376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.280 [2024-07-12 11:07:44.188404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.280 qpair failed and we were unable to recover it. 
00:29:27.280 [2024-07-12 11:07:44.188832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.280 [2024-07-12 11:07:44.188860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.280 qpair failed and we were unable to recover it.
00:29:27.559 [2024-07-12 11:07:44.282028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.559 [2024-07-12 11:07:44.282055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.559 qpair failed and we were unable to recover it.
00:29:27.559 [2024-07-12 11:07:44.282390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.559 [2024-07-12 11:07:44.282420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.559 qpair failed and we were unable to recover it. 00:29:27.559 [2024-07-12 11:07:44.282838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.559 [2024-07-12 11:07:44.282867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.559 qpair failed and we were unable to recover it. 00:29:27.559 [2024-07-12 11:07:44.283285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.559 [2024-07-12 11:07:44.283314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.559 qpair failed and we were unable to recover it. 00:29:27.559 [2024-07-12 11:07:44.283744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.559 [2024-07-12 11:07:44.283771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.559 qpair failed and we were unable to recover it. 00:29:27.559 [2024-07-12 11:07:44.284201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.559 [2024-07-12 11:07:44.284231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.559 qpair failed and we were unable to recover it. 00:29:27.559 [2024-07-12 11:07:44.284547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.559 [2024-07-12 11:07:44.284575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.284867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.284897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.285326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.285355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.285776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.285805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.286235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.286265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 
00:29:27.560 [2024-07-12 11:07:44.286693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.286723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.287144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.287174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.287529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.287557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.287990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.288018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.288422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.288451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.288875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.288902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.289329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.289358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.289775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.289810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.290245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.290275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.290744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.290772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 
00:29:27.560 [2024-07-12 11:07:44.291188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.291218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.291646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.291674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.292077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.292105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.292534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.292562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.292992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.293020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.293430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.293459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.293923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.293951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.294355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.294385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.294805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.294834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.295260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.295291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 
00:29:27.560 [2024-07-12 11:07:44.295722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.295750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.296177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.296206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.296608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.296636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.296937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.296968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.297361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.297390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.297818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.297845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.298170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.298201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.298615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.298643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.299072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.299100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.299518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.299546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 
00:29:27.560 [2024-07-12 11:07:44.299967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.299994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.300414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.300442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.560 qpair failed and we were unable to recover it. 00:29:27.560 [2024-07-12 11:07:44.300870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.560 [2024-07-12 11:07:44.300898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.301113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.301153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.301578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.301607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.302034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.302061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.302483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.302513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.302930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.302959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.303402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.303431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.303872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.303900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 
00:29:27.561 [2024-07-12 11:07:44.304334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.304363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.304777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.304805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.305238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.305266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.305675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.305703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.306117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.306159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.306606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.306634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.307063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.307092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.307532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.307568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.307989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.308018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.308442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.308472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 
00:29:27.561 [2024-07-12 11:07:44.308905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.308932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.309294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.309324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.309638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.309666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.310163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.310192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.310628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.310655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.310934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.310964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.311391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.311420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.311845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.311872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.312302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.312332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.312707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.312743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 
00:29:27.561 [2024-07-12 11:07:44.313145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.313174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.313653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.313682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.314111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.314167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.561 [2024-07-12 11:07:44.314629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.561 [2024-07-12 11:07:44.314658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.561 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.315070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.315098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.315529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.315558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.315984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.316011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.316412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.316441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.316865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.316893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.317316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.317345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 
00:29:27.562 [2024-07-12 11:07:44.317764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.317793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.318089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.318121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.318560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.318588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.319012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.319041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.319469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.319499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.319927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.319955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.320393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.320423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.320854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.320881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.321459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.321561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.322076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.322111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 
00:29:27.562 [2024-07-12 11:07:44.322601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.322630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.323011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.323040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.323450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.323480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.323902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.323930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.324356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.324387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.324819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.324847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.325366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.325468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.325921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.325968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.326429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.326460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.326864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.326892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 
00:29:27.562 [2024-07-12 11:07:44.327225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.327259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.327709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.327739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.328165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.328196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.328643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.328671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.329099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.329138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.329547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.329576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.330000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.330028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.562 [2024-07-12 11:07:44.330512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.562 [2024-07-12 11:07:44.330541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.562 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.330974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.331002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.331410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.331439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 
00:29:27.563 [2024-07-12 11:07:44.331863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.331891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.332310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.332341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.332768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.332796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.333221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.333250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.333678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.333706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.334131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.334161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.334457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.334489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.334801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.334833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.335220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.335250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.335670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.335699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 
00:29:27.563 [2024-07-12 11:07:44.336120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.336164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.336662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.336690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.337116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.337157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.337605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.337633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.338062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.338091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.338552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.338581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.339009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.339038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.339353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.339383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.339665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.339695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.340193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.340223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 
00:29:27.563 [2024-07-12 11:07:44.340633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.340661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.341096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.341133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.341556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.341586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.342004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.342033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.342464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.342494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.342815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.342844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.343293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.343322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.343747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.343782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.344245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.344274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 00:29:27.563 [2024-07-12 11:07:44.344582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.563 [2024-07-12 11:07:44.344610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.563 qpair failed and we were unable to recover it. 
00:29:27.568 [2024-07-12 11:07:44.430056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.568 [2024-07-12 11:07:44.430084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.568 qpair failed and we were unable to recover it. 00:29:27.568 [2024-07-12 11:07:44.430523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.430552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.431001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.431029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.431456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.431486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.431949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.431978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.432408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.432437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.432895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.432923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.433354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.433384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.433805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.433834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.434254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.434283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 
00:29:27.569 [2024-07-12 11:07:44.434683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.434723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.435148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.435178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.435472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.435500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.435937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.435965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.436387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.436416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.436846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.436876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.437305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.437333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.437761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.437789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.438221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.438251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.438684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.438712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 
00:29:27.569 [2024-07-12 11:07:44.439218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.439249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.439709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.439738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.440074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.440102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.440490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.440518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.440949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.440978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.441416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.441445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.441864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.441891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.442205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.442239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.442661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.442690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.443003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.443032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 
00:29:27.569 [2024-07-12 11:07:44.443490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.443519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.443824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.443853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.444165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.444194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.444675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.444703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.445184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.445220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.445634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.445662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.446082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.446112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.446415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.446444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.446900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.446928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.569 [2024-07-12 11:07:44.447341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.447372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 
00:29:27.569 [2024-07-12 11:07:44.447685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.569 [2024-07-12 11:07:44.447713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.569 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.448141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.448170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.448518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.448546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.448988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.449016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.449420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.449449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.449885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.449913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.450348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.450377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.450814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.450842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.451166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.451202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.451629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.451658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 
00:29:27.570 [2024-07-12 11:07:44.452099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.452140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.452648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.452676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.453080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.453109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.453598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.453627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.453850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.453880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.454276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.454306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.454739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.454768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.455231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.455261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.455545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.455573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.456015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.456043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 
00:29:27.570 [2024-07-12 11:07:44.456450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.456478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.456946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.456975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.457347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.457376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.457767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.457795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.458222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.458252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.458660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.458688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.459118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.459169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.459607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.459635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.460075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.460103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.460519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.460548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 
00:29:27.570 [2024-07-12 11:07:44.460978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.461006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.461412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.461441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.461861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.461889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.462318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.462347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.462770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.462804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.463100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.463143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.463593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.463621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.464049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.464077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.464508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.464537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.464961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.464989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 
00:29:27.570 [2024-07-12 11:07:44.465421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.570 [2024-07-12 11:07:44.465452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.570 qpair failed and we were unable to recover it. 00:29:27.570 [2024-07-12 11:07:44.465874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.465902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.466322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.466351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.466780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.466808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.467247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.467276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.467578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.467608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.468060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.468088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.468467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.468496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.468944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.468972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.469405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.469433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 
00:29:27.571 [2024-07-12 11:07:44.469869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.469896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.470309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.470338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.470782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.470810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.471236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.471265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.471724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.471752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.472173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.472203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.472631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.472659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.473140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.473169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.473573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.473600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.473897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.473927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 
00:29:27.571 [2024-07-12 11:07:44.474340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.474369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.474796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.474824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.475261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.475290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.475717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.475744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.476173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.476201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.476527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.476555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.476996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.477023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.477420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.477448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.477839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.477867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.478282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.478311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 
00:29:27.571 [2024-07-12 11:07:44.478724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.478752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.479181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.479210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.479710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.479737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.480141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.480170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.571 qpair failed and we were unable to recover it. 00:29:27.571 [2024-07-12 11:07:44.480582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.571 [2024-07-12 11:07:44.480617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.481024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.481052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.481451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.481480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.481906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.481934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.482334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.482363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.482809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.482837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 
00:29:27.572 [2024-07-12 11:07:44.483269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.483298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.483577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.483605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.484023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.484051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.484458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.484487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.484917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.484944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.485367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.485396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.485835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.485863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.486280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.486309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.486753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.486781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.487093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.487133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 
00:29:27.572 [2024-07-12 11:07:44.487535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.487564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.487893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.487921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.488331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.488360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.488790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.488817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.489247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.489276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.489742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.489769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.490167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.490196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.490608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.490636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.490952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.490980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 00:29:27.572 [2024-07-12 11:07:44.491495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.572 [2024-07-12 11:07:44.491524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.572 qpair failed and we were unable to recover it. 
00:29:27.572 [2024-07-12 11:07:44.491957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.572 [2024-07-12 11:07:44.491985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.572 qpair failed and we were unable to recover it.
[previous three messages repeated 209 more times between 11:07:44.492398 and 11:07:44.583918 — every connect attempt to tqpair=0x7f5b90000b90 (addr=10.0.0.2, port=4420) failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:29:27.849 [2024-07-12 11:07:44.584357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.584386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.584810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.584838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.585291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.585319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.585785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.585811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.586234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.586275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.586696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.586724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.587188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.587218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.587525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.587556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.587982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.588011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.588499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.588528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 
00:29:27.849 [2024-07-12 11:07:44.588948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.588977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.589380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.589410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.849 [2024-07-12 11:07:44.589831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.849 [2024-07-12 11:07:44.589860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.849 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.590273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.590303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.590748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.590775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.591081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.591110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.591548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.591577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.592006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.592035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.592484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.592515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.592926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.592955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 
00:29:27.850 [2024-07-12 11:07:44.593405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.593436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.593750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.593780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.594065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.594096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.594558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.594587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.594990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.595019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.595431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.595459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.595881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.595910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.596316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.596346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.596772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.596800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.597231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.597260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 
00:29:27.850 [2024-07-12 11:07:44.597721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.597749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.598180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.598216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.598680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.598708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.599008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.599039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.599531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.599561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.599872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.599899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.600322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.600351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.600779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.600806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.601232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.601260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.601678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.601706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 
00:29:27.850 [2024-07-12 11:07:44.602192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.602221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.602662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.602690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.603119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.850 [2024-07-12 11:07:44.603161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.850 qpair failed and we were unable to recover it. 00:29:27.850 [2024-07-12 11:07:44.603603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.603630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.604035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.604063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.604524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.604554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.604977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.605006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.605412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.605442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.605759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.605786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.606216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.606244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 
00:29:27.851 [2024-07-12 11:07:44.606676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.606703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.607151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.607179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.607650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.607678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.608143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.608172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.608610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.608637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.609063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.609091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.609505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.609535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.609965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.609993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.610409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.610440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.610866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.610895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 
00:29:27.851 [2024-07-12 11:07:44.611314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.611342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.611785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.611813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.612043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.612071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.612381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.612411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.612826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.612854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.613272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.613302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.613753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.613781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.614206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.614236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.614634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.614661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.615088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.615116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 
00:29:27.851 [2024-07-12 11:07:44.615535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.615564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.615991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.616025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.616439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.616468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.616899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.616927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.617345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.617374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.617793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.617820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.618205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.618234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.618660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.618688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.619117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.619157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.619594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.619622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 
00:29:27.851 [2024-07-12 11:07:44.620089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.620118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.620574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.620603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.621030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.851 [2024-07-12 11:07:44.621058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.851 qpair failed and we were unable to recover it. 00:29:27.851 [2024-07-12 11:07:44.621496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.621526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.621955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.621983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.622383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.622412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.622836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.622864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.623295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.623323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.623750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.623778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.624203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.624235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 
00:29:27.852 [2024-07-12 11:07:44.624661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.624689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.625117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.625175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.625506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.625537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.626025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.626053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.626359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.626389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.626816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.626843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.627320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.627350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.627770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.627798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.628232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.628261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.628667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.628695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 
00:29:27.852 [2024-07-12 11:07:44.629132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.629161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.629485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.629517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.629921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.629951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.630368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.630398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.630819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.630848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.631181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.631211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.631619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.631648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.632077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.632106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.632577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.632606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.633006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.633035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 
00:29:27.852 [2024-07-12 11:07:44.633447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.633476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.633901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.633936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.634290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.634320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.634744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.634772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.635196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.635225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.635648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.635678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.636092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.636120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.636559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.636587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.637018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.637047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 00:29:27.852 [2024-07-12 11:07:44.637478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.852 [2024-07-12 11:07:44.637508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.852 qpair failed and we were unable to recover it. 
00:29:27.852 [2024-07-12 11:07:44.637900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.637929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.638325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.638354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.638785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.638813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.639229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.639257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.639669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.639697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.640132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.640161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.640601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.640630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.641054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.641083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.641530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.641560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.641958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.641986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 
00:29:27.853 [2024-07-12 11:07:44.642405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.642435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.642862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.642890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.643317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.643346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.643773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.643802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.644099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.644138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.644570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.644598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.644958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.644986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.645314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.645343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.645749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.645778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 00:29:27.853 [2024-07-12 11:07:44.646205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.853 [2024-07-12 11:07:44.646234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.853 qpair failed and we were unable to recover it. 
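errno = 111 on Linux is ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting on TCP port 4420 while the nvmf target is down, which is the condition posix_sock_create keeps reporting above. A minimal standalone C sketch that reproduces the same failure; this is illustrative only, not SPDK source, and assumes a Linux host where 10.0.0.2 is up on the local subnet (an unreachable host would time out instead of being refused):

    /* Illustrative only, not SPDK code: connect() to a port with no
     * listener fails with ECONNREFUSED, i.e. errno = 111 on Linux.
     * Address and port mirror the log (10.0.0.2:4420, NVMe/TCP default). */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);            /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no nvmf target listening: errno == ECONNREFUSED == 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }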
00:29:27.853 (connect() failed, errno = 111 and the matching qpair failures continue from 11:07:44.646575 through 11:07:44.650109, interleaved with the harness output below)
00:29:27.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2284555 Killed "${NVMF_APP[@]}" "$@"
00:29:27.853 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:27.853 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:27.853 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:27.853 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:27.853 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:27.854 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2285488
00:29:27.854 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2285488
00:29:27.854 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2285488 ']'
00:29:27.854 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:27.854 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:27.854 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:27.854 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:27.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:27.854 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:27.854 11:07:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
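The waitforlisten step above polls until the newly launched nvmf_tgt (pid 2285488) accepts on the RPC socket, giving up after max_retries=100. A rough C sketch of that idea, assuming the /var/tmp/spdk.sock path from the trace; the real helper is a shell function in autotest_common.sh and may check readiness differently:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Returns nonzero once something accepts on the UNIX-domain socket. */
static int rpc_ready(const char *path)
{
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    int ok;

    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
    ok = connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    /* max_retries=100 as in the trace above; the delay is illustrative. */
    for (int i = 0; i < 100; i++) {
        if (rpc_ready("/var/tmp/spdk.sock")) {
            puts("process is up and listening");
            return 0;
        }
        usleep(100 * 1000);
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}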
00:29:27.857 [2024-07-12 11:07:44.716506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:29:27.857 [2024-07-12 11:07:44.716584] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
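The EAL line confirms the core mask requested via nvmfappstart -m 0xF0: -c 0xF0 selects CPU cores 4 through 7. A quick illustrative decoder for such a mask:

#include <stdio.h>

int main(void)
{
    unsigned mask = 0xF0; /* from "-c 0xF0" / "nvmfappstart -m 0xF0" above */
    printf("cores:");
    for (int core = 0; core < 32; core++)
        if (mask & (1u << core))
            printf(" %d", core);
    printf("\n"); /* prints: cores: 4 5 6 7 */
    return 0;
}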
00:29:27.858 [2024-07-12 11:07:44.731715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.858 [2024-07-12 11:07:44.731746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.858 qpair failed and we were unable to recover it.
00:29:27.858 [2024-07-12 11:07:44.732168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.732199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.732654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.732682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.733119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.733161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.733596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.733624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.733936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.733964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.734385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.734413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.734845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.734873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.735353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.735383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.735698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.735726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.736160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.736189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 
00:29:27.858 [2024-07-12 11:07:44.736495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.736526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.736959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.736996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.858 [2024-07-12 11:07:44.737418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.858 [2024-07-12 11:07:44.737448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.858 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.737871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.737899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.738337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.738367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.738844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.738872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.739343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.739373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.739808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.739837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.740266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.740295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.740712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.740741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 
00:29:27.859 [2024-07-12 11:07:44.741102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.741143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.741605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.741633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.742033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.742061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.742494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.742524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.742832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.742859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.743276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.743305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.743743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.743772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.744197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.744226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.744644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.744672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.744984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.745012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 
00:29:27.859 [2024-07-12 11:07:44.745341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.745370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.745842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.745870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.746143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.746175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.746630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.746659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.747087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.747115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.747542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.747572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.747907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.747937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.748375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.748406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.748831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.748860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 00:29:27.859 [2024-07-12 11:07:44.749289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.859 [2024-07-12 11:07:44.749319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.859 qpair failed and we were unable to recover it. 
00:29:27.859 [2024-07-12 11:07:44.749749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.859 [2024-07-12 11:07:44.749777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.859 qpair failed and we were unable to recover it.
00:29:27.859 [2024-07-12 11:07:44.750212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.859 [2024-07-12 11:07:44.750242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.859 qpair failed and we were unable to recover it.
00:29:27.859 [2024-07-12 11:07:44.750669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.859 [2024-07-12 11:07:44.750697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.859 qpair failed and we were unable to recover it.
00:29:27.859 [2024-07-12 11:07:44.751140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.859 [2024-07-12 11:07:44.751169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.859 qpair failed and we were unable to recover it.
00:29:27.859 [2024-07-12 11:07:44.751640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.859 [2024-07-12 11:07:44.751668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.859 qpair failed and we were unable to recover it.
00:29:27.859 [2024-07-12 11:07:44.752069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.859 [2024-07-12 11:07:44.752097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.859 qpair failed and we were unable to recover it.
00:29:27.859 [2024-07-12 11:07:44.752580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.859 [2024-07-12 11:07:44.752610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.859 qpair failed and we were unable to recover it.
00:29:27.859 [2024-07-12 11:07:44.753041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.859 [2024-07-12 11:07:44.753069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.860 qpair failed and we were unable to recover it.
00:29:27.860 EAL: No free 2048 kB hugepages reported on node 1
00:29:27.860 [2024-07-12 11:07:44.753500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.860 [2024-07-12 11:07:44.753531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.860 qpair failed and we were unable to recover it.
00:29:27.860 [2024-07-12 11:07:44.753962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.860 [2024-07-12 11:07:44.753991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:27.860 qpair failed and we were unable to recover it.
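The "EAL: No free 2048 kB hugepages reported on node 1" record above means DPDK found no 2 MiB hugepages reserved on NUMA node 1 when the nvmf target initialized; SPDK expects its hugepage pool to be set up before the target starts. A minimal sketch of how 2 MiB pages are commonly reserved per node on Linux (illustrative; the page counts and the use of SPDK's scripts/setup.sh helper are assumptions, not taken from this run):

    # reserve 1024 x 2 MiB pages on each NUMA node (run as root)
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    grep -i hugepages /proc/meminfo    # verify HugePages_Total / HugePages_Free
    # SPDK's repo ships a helper that does this (and binds devices), e.g.:
    # HUGEMEM=4096 ./scripts/setup.sh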
00:29:27.860 [2024-07-12 11:07:44.754314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.754343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.754772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.754800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.755106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.755151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.755590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.755619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.756030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.756058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.756398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.756427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.756873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.756901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.757227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.757255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.757871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.757901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.758203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.758233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 
00:29:27.860 [2024-07-12 11:07:44.758674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.758703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.759142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.759171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.759621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.759649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.760089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.760117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.760564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.760599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.761013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.761041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.761491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.761520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.761958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.761988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.762419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.762449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.762913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.762942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 
00:29:27.860 [2024-07-12 11:07:44.763355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.763385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.763856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.763883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.764301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.764331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.764775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.764804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.765119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.765158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.765578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.765606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.766050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.766078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.766509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.766538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.766975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.767003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.767464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.767494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 
00:29:27.860 [2024-07-12 11:07:44.767921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.767949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.768310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.768339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.768747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.768776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.769195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.769224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.769540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.769571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.769992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.770020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.770436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.770465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.860 [2024-07-12 11:07:44.770897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.860 [2024-07-12 11:07:44.770924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.860 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.771340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.771370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.771816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.771844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 
00:29:27.861 [2024-07-12 11:07:44.772278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.772307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.772737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.772767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.773210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.773240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.773662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.773691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.774115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.774155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.774600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.774628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.775020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.775048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.775497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.775527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.775845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.775873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.776314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.776344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 
00:29:27.861 [2024-07-12 11:07:44.776778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.776806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.777234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.777263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.777710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.777739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.778017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.778044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.778484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.778519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.778928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.778956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.779411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.779440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.779776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.779805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.780244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.780274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.780698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.780726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 
00:29:27.861 [2024-07-12 11:07:44.781152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.781182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.781613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.781641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.782073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.782101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.782523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.782552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.782842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.782874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.783183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.783214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.783655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.783683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.784130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.784158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.784592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.784622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.785050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.785078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 
00:29:27.861 [2024-07-12 11:07:44.785509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.785538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.785808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.785835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.786165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.786194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.786612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.786641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.787072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.787100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.787549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.787577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.788008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.788036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.788524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.788554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.861 qpair failed and we were unable to recover it. 00:29:27.861 [2024-07-12 11:07:44.788863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.861 [2024-07-12 11:07:44.788890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 00:29:27.862 [2024-07-12 11:07:44.789299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.789328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 
00:29:27.862 [2024-07-12 11:07:44.789759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.789788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 00:29:27.862 [2024-07-12 11:07:44.790093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.790136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 00:29:27.862 [2024-07-12 11:07:44.790569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.790598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 00:29:27.862 [2024-07-12 11:07:44.791034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.791064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 00:29:27.862 [2024-07-12 11:07:44.791511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.791543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 00:29:27.862 [2024-07-12 11:07:44.791882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.791912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 00:29:27.862 [2024-07-12 11:07:44.792335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.792366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 00:29:27.862 [2024-07-12 11:07:44.792682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.792711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 00:29:27.862 [2024-07-12 11:07:44.793151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.793182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 00:29:27.862 [2024-07-12 11:07:44.793607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.862 [2024-07-12 11:07:44.793636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:27.862 qpair failed and we were unable to recover it. 
00:29:27.863 [2024-07-12 11:07:44.807277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence resumes at 11:07:44.807408 and repeats for every retry through 11:07:44.880174 ...]
00:29:28.138 [2024-07-12 11:07:44.876612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.876640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.877068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.877099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.877551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.877580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.877892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.877921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.878330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.878360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.878791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.878819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.879226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.879256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.879676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.879704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.880145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.880174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.880600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.880629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 
00:29:28.138 [2024-07-12 11:07:44.880882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.880910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.881341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.881370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.881797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.881825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.882250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.882279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.138 [2024-07-12 11:07:44.882710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.138 [2024-07-12 11:07:44.882738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.138 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.883160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.883198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.883621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.883649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.884076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.884107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.884567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.884596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.884934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.884962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 
00:29:28.139 [2024-07-12 11:07:44.885369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.885399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.885817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.885846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.886250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.886278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.886669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.886697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.887141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.887171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.887603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.887631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.887937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.887965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.888300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.888330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.888813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.888843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.889276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.889306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 
00:29:28.139 [2024-07-12 11:07:44.889716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.889744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.890180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.890208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.890639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.890667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.891002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.891030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.891455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.891483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.891916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.891948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.892362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.892392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.892821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.892851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.893276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.893304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.893713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.893741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 
00:29:28.139 [2024-07-12 11:07:44.894058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.894087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.894336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.894364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.139 qpair failed and we were unable to recover it. 00:29:28.139 [2024-07-12 11:07:44.894809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.139 [2024-07-12 11:07:44.894838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.895266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.895296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.895738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.895768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.896181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.896211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.896532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.896561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.896994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.897023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.897436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.897466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.897901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.897931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 
00:29:28.140 [2024-07-12 11:07:44.898431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.898460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.898779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.898807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.899119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.899158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.899660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.899689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.900118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.900168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.900591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.900627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.900964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.900993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.901425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.901454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.901780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.901808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.902244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.902273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 
00:29:28.140 [2024-07-12 11:07:44.902717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.902745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.903180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.903208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.903637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.140 [2024-07-12 11:07:44.903656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.903689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.140 [2024-07-12 11:07:44.903691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 [2024-07-12 11:07:44.903698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.140 [2024-07-12 11:07:44.903710] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.140 [2024-07-12 11:07:44.903717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.903895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:28.140 [2024-07-12 11:07:44.904099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.904137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.904113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:28.140 [2024-07-12 11:07:44.904284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:28.140 [2024-07-12 11:07:44.904284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:28.140 [2024-07-12 11:07:44.904562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.904591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.905031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.905065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.905476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.905506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 
00:29:28.140 [2024-07-12 11:07:44.905825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.905852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.906289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.906318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.906749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.906776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.907206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.907235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.907691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.907718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.907996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.908024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.908446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.908474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.908917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.908945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.909381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.909411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.140 [2024-07-12 11:07:44.909719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.909751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 
00:29:28.140 [2024-07-12 11:07:44.910182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.140 [2024-07-12 11:07:44.910212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.140 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.910500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.910528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.910854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.910882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.911221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.911294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.911736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.911764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.912081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.912109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.912353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.912381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.912865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.912893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.913301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.913330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.913764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.913791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 
00:29:28.141 [2024-07-12 11:07:44.914225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.914254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.914663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.914691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.915134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.915163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.915487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.915518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.915991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.916019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.916441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.916477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.916892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.916920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.917354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.917383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.917827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.917855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.918170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.918199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 
00:29:28.141 [2024-07-12 11:07:44.918649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.918678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.919075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.919103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.919315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.919344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.919730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.919758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.920204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.920233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.920656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.920683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.921005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.921032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.921439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.921468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.921688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.921717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.922191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.922221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 
00:29:28.141 [2024-07-12 11:07:44.922700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.922728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.923157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.923187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.923617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.923644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.923926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.923954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.924432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.924461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.924882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.141 [2024-07-12 11:07:44.924911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.141 qpair failed and we were unable to recover it. 00:29:28.141 [2024-07-12 11:07:44.925335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.925367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.925819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.925847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.926278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.926308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.926755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.926782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 
00:29:28.142 [2024-07-12 11:07:44.927071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.927102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.927437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.927467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.927879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.927908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.928337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.928368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.928625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.928654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.929089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.929117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.929556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.929585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.930022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.930055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.930493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.930523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.930962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.930990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 
00:29:28.142 [2024-07-12 11:07:44.931416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.931445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.931866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.931896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.932329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.932360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.932778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.932807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.933119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.933190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.933656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.933692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.934110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.934151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.934555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.934583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.935010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.935039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 00:29:28.142 [2024-07-12 11:07:44.935302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.142 [2024-07-12 11:07:44.935332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.142 qpair failed and we were unable to recover it. 
00:29:28.142 [2024-07-12 11:07:44.935739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.142 [2024-07-12 11:07:44.935768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.142 qpair failed and we were unable to recover it.
[... the same three-line failure triplet — connect() failed, errno = 111; sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 11:07:44.936 through 11:07:45.023 ...]
00:29:28.148 [2024-07-12 11:07:45.023041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.148 [2024-07-12 11:07:45.023072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.148 qpair failed and we were unable to recover it.
00:29:28.148 [2024-07-12 11:07:45.023488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.023517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.023948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.023976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.024090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.024117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.024503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.024531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.024947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.024975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.025248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.025278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.025702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.025730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.025967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.025995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.026348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.026377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.026693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.026721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 
00:29:28.148 [2024-07-12 11:07:45.027134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.027163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.027611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.027638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.028083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.028113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.028325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.028352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.028636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.028664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.029093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.029120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.029480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.029510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.030026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.030054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.030488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.030518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.030866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.030894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 
00:29:28.148 [2024-07-12 11:07:45.031298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.031331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.031756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.031784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.032224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.032255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.032682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.032710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.033148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.033177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.033481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.148 [2024-07-12 11:07:45.033515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.148 qpair failed and we were unable to recover it. 00:29:28.148 [2024-07-12 11:07:45.033937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.033965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.034217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.034247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.034692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.034720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.035017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.035047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 
00:29:28.149 [2024-07-12 11:07:45.035477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.035506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.035948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.035978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.036387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.036416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.036647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.036680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.037100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.037173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.037682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.037711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.038105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.038149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.038584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.038614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.039048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.039077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.039552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.039583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 
00:29:28.149 [2024-07-12 11:07:45.039884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.039913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.040159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.040189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.040661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.040689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.041133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.041162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.041644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.041673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.042111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.042162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.042603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.042632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.043066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.043094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.043345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.043374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.043636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.043664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 
00:29:28.149 [2024-07-12 11:07:45.044079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.044108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.044558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.044587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.045033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.045061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.045188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.045216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.045684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.045712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.046070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.046098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.046533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.046562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.047035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.047063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.047383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.047411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.047824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.047852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 
00:29:28.149 [2024-07-12 11:07:45.048286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.048314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.048747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.048776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.049215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.049244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.049472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.149 [2024-07-12 11:07:45.049500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.149 qpair failed and we were unable to recover it. 00:29:28.149 [2024-07-12 11:07:45.049939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.049966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.050412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.050447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.050757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.050786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.051216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.051246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.051360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.051386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.051794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.051822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 
00:29:28.150 [2024-07-12 11:07:45.052254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.052283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.052580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.052611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.052953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.052982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.053385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.053414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.053844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.053872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.054307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.054336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.054780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.054807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.055242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.055271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.055690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.055719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.055953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.055981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 
00:29:28.150 [2024-07-12 11:07:45.056324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.056353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.056795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.056823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.057255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.057285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.057650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.057678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.058116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.058157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.058633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.058662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.059095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.059144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.059572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.059600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.060076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.060106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.060512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.060542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 
00:29:28.150 [2024-07-12 11:07:45.060845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.060874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.061136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.061167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.061644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.061673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.062114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.062156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.062646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.062675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.063079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.063107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.063545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.063574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.064008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.064035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.064361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.064390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.064827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.064857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 
00:29:28.150 [2024-07-12 11:07:45.065373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.065479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.065817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.065853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.066287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.066319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.150 [2024-07-12 11:07:45.066734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.150 [2024-07-12 11:07:45.066763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.150 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.067192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.067223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.067710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.067751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.068166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.068197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.068706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.068735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.069207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.069237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.069643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.069671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 
00:29:28.151 [2024-07-12 11:07:45.070120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.070165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.070588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.070616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.071054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.071082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.071517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.071547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.071954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.071982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.072408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.072439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.072872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.072900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.073380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.073410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.073813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.073840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.074271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.074301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 
00:29:28.151 [2024-07-12 11:07:45.074720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.074748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.075179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.075209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.075645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.075673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.075995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.076022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.076434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.076465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.076777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.076805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.076967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.076995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.077329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.077358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.077755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.077785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 00:29:28.151 [2024-07-12 11:07:45.078227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.151 [2024-07-12 11:07:45.078257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.151 qpair failed and we were unable to recover it. 
00:29:28.151 [2024-07-12 11:07:45.078570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.151 [2024-07-12 11:07:45.078606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.151 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triple repeats roughly 200 more times for tqpair=0x7f5b90000b90 (10.0.0.2:4420) between 11:07:45.079 and 11:07:45.166; repetitions elided ...]
00:29:28.429 [2024-07-12 11:07:45.166791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.429 [2024-07-12 11:07:45.166819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.429 qpair failed and we were unable to recover it.
00:29:28.429 [2024-07-12 11:07:45.167146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.167175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.167469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.167497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.167921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.167951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.168366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.168396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.168824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.168853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.169098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.169139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.169647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.169676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.169957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.169987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.170448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.170477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.170838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.170866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 
00:29:28.429 [2024-07-12 11:07:45.171288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.171316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.171775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.171802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.171956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.171984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.172440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.172470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.172911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.172939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.173261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.173289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.173718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.173746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.174193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.174222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.174549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.174576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.175016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.175046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 
00:29:28.429 [2024-07-12 11:07:45.175288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.175320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.175759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.175788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.176227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.176255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.176685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.176713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.177151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.177181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.177421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.177450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.177710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.177739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.178165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.178196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-07-12 11:07:45.178649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-07-12 11:07:45.178678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.179110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.179152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 
00:29:28.430 [2024-07-12 11:07:45.179602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.179630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.179998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.180026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.180275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.180311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.180760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.180789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.181037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.181065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.181578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.181609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.181877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.181907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.182349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.182379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.182810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.182840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.183273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.183303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 
00:29:28.430 [2024-07-12 11:07:45.183549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.183577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.184085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.184115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.184535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.184564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.184880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.184908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.185246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.185276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.185716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.185744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.186192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.186222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.186658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.186687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.187133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.187162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.187631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.187659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 
00:29:28.430 [2024-07-12 11:07:45.188093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.188132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.188624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.188655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.189078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.189106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.189565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.189595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.190025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.190054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.190493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.190523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.190813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.190845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.191279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.191309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.191743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.191771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-07-12 11:07:45.192178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.192208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 
00:29:28.430 [2024-07-12 11:07:45.192449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-07-12 11:07:45.192478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.192798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.192826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.193149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.193178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.193592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.193621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.194054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.194082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.194532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.194562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.194995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.195023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.195512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.195541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.195972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.196002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.196328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.196362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 
00:29:28.431 [2024-07-12 11:07:45.196707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.196735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.197152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.197183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.197430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.197471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.197958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.197987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.198323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.198352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.198804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.198831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.199265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.199295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.199607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.199634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.199889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.199917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.200357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.200386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 
00:29:28.431 [2024-07-12 11:07:45.200804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.200832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.201080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.201108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.201610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.201638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.202112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.202152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.202617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.202647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.202894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.202922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.203361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.203392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.203695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.203726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.204182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.204211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.204478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.204508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 
00:29:28.431 [2024-07-12 11:07:45.204925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.204953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.205233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.205262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.205513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.205541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.206018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.206046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.206516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.206545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.206977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.207007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.207453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.207482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.207918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.207946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.208377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.208406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.208710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.208739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 
00:29:28.431 [2024-07-12 11:07:45.209172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.431 [2024-07-12 11:07:45.209203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.431 qpair failed and we were unable to recover it. 00:29:28.431 [2024-07-12 11:07:45.209602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.209631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.209882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.209911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.210337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.210365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.210771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.210800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.211233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.211263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.211533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.211563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.211992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.212020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.212434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.212463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.212905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.212933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 
00:29:28.432 [2024-07-12 11:07:45.213377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.213406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.213645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.213673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.213970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.214006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.214416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.214446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.214865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.214892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.215004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.215031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.215491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.215521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.215771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.215800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.216108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.216160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.216595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.216624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 
00:29:28.432 [2024-07-12 11:07:45.217048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.217076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.217510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.217539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.217976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.218005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.218309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.218341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.218760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.218788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.219029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.219057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.219562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.219593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.220064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.220092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.220429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.220457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 00:29:28.432 [2024-07-12 11:07:45.220757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.432 [2024-07-12 11:07:45.220788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.432 qpair failed and we were unable to recover it. 
00:29:28.432 [2024-07-12 11:07:45.221235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-07-12 11:07:45.221265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-07-12 11:07:45.221640] .. 00:29:28.439 [2024-07-12 11:07:45.312421] (the same three-line failure repeats for every subsequent reconnect attempt in this interval: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:29:28.439 [2024-07-12 11:07:45.312712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.312741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.313166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.313196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.313468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.313496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.313791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.313818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.314235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.314265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.314732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.314762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.315187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.315216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.315654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.315681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.316089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.316116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.316562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.316590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-07-12 11:07:45.316828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.316856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.317138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.317167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.317413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.317446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.317906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.317935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.318045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.318072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.318578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.318607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.319034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.319062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.319300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.319329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.319775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.319803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.320083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.320111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-07-12 11:07:45.320293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.320325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.320711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.320739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.321144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.321173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.321420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.321447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.321872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.321901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.322340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.322369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.322615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.322643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.323085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.323113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.323546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.323574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.324007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.324036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-07-12 11:07:45.324464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-07-12 11:07:45.324494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-07-12 11:07:45.324959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.324987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.325412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.325441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.325871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.325899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.326331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.326361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.326809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.326838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.327269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.327298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.327732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.327761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.328243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.328273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.328708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.328737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 
00:29:28.440 [2024-07-12 11:07:45.329162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.329191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.329446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.329474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.329929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.329958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.330383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.330412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.330569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.330597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.331046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.331075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.331554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.331583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.332014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.332042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.332545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.332575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.332885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.332916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 
00:29:28.440 [2024-07-12 11:07:45.333190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.333219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.333655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.333683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.334117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.334165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.334648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.334677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.335106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.335147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.335565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.335593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.336022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.336051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.336566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.336598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.337007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.337035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.337308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.337338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 
00:29:28.440 [2024-07-12 11:07:45.337763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.337791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.338229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.338258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.338679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.338707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.339051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.339079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.339520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.339549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.339827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.339855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.340093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.340121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.340438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.340467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.340901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.340929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.341353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.341384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 
00:29:28.440 [2024-07-12 11:07:45.341830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.341858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-07-12 11:07:45.342198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-07-12 11:07:45.342227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.342664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.342692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.343004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.343032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.343463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.343493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.343908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.343937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.344244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.344273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.344554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.344583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.344823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.344853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.345266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.345298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 
00:29:28.441 [2024-07-12 11:07:45.345726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.345754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.345990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.346019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.346448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.346478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.346911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.346939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.347350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.347379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.347752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.347780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.348217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.348246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.348658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.348687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.349096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.349137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.349403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.349431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 
00:29:28.441 [2024-07-12 11:07:45.349775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.349805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.350234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.350265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.350698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.350727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.351167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.351197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.351666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.351695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.351952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.351980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.352421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.352451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.352895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.352922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.353344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.353373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.353793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.353821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 
00:29:28.441 [2024-07-12 11:07:45.354251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.354280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.354648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.354676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.354930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.354958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.355465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.355493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.355928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.355957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.356364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.356393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.356787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.356818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.357248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.357279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.357556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.357586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.357856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.357884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 
00:29:28.441 [2024-07-12 11:07:45.358316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.358346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.358789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.441 [2024-07-12 11:07:45.358817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.441 qpair failed and we were unable to recover it. 00:29:28.441 [2024-07-12 11:07:45.359252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.359282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.359735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.359764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.360014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.360042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.360456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.360486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.360913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.360941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.361354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.361383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.361814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.361842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.362102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.362153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 
00:29:28.442 [2024-07-12 11:07:45.362439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.362468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.362800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.362829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.363260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.363291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.363626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.363655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.364005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.364035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.364434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.364464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.364898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.364927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.365233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.365262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.365690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.365717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 00:29:28.442 [2024-07-12 11:07:45.365977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.442 [2024-07-12 11:07:45.366005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.442 qpair failed and we were unable to recover it. 
00:29:28.442 [2024-07-12 11:07:45.366344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.442 [2024-07-12 11:07:45.366373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.442 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts between 11:07:45.366 and 11:07:45.453 ...]
00:29:28.718 [2024-07-12 11:07:45.452903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.718 [2024-07-12 11:07:45.452932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.718 qpair failed and we were unable to recover it.
00:29:28.718 [2024-07-12 11:07:45.453286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.718 [2024-07-12 11:07:45.453317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.718 qpair failed and we were unable to recover it. 00:29:28.718 [2024-07-12 11:07:45.453747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.718 [2024-07-12 11:07:45.453776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.718 qpair failed and we were unable to recover it. 00:29:28.718 [2024-07-12 11:07:45.454012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.718 [2024-07-12 11:07:45.454041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.718 qpair failed and we were unable to recover it. 00:29:28.718 [2024-07-12 11:07:45.454470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.718 [2024-07-12 11:07:45.454501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.718 qpair failed and we were unable to recover it. 00:29:28.718 [2024-07-12 11:07:45.454930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.718 [2024-07-12 11:07:45.454961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.718 qpair failed and we were unable to recover it. 00:29:28.718 [2024-07-12 11:07:45.455394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.718 [2024-07-12 11:07:45.455423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.718 qpair failed and we were unable to recover it. 00:29:28.718 [2024-07-12 11:07:45.455842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.455870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.456114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.456156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.456406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.456436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.456718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.456747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 
00:29:28.719 [2024-07-12 11:07:45.456978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.457009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.457434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.457465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.457781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.457809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.458114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.458159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.458613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.458642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.459081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.459109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.459563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.459592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.460027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.460055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.460274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.460304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.460740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.460775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 
00:29:28.719 [2024-07-12 11:07:45.461203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.461234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.461654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.461684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.462120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.462160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.462573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.462601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.463039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.463068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.463351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.463381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.463830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.463860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.464290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.464320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.464592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.464620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.465056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.465084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 
00:29:28.719 [2024-07-12 11:07:45.465528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.465558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.465980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.466010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.466430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.466461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.466884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.466913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.719 [2024-07-12 11:07:45.467348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.719 [2024-07-12 11:07:45.467378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.719 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.467690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.467721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.468209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.468238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.468639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.468669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.469090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.469118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.469568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.469598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 
00:29:28.720 [2024-07-12 11:07:45.469916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.469946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.470373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.470403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.470899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.470929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.471359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.471389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.471809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.471837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.472271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.472300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.472755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.472785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.473217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.473248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.473674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.473704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.474146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.474177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 
00:29:28.720 [2024-07-12 11:07:45.474588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.474617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.474954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.474984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.475214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.475243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.475703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.475733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.476166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.476196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.476434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.476462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.476876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.476905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.720 [2024-07-12 11:07:45.477342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.720 [2024-07-12 11:07:45.477372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.720 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.477792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.477822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.478256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.478292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 
00:29:28.721 [2024-07-12 11:07:45.478525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.478555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.478966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.478995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.479458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.479489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.479904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.479934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.480345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.480374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.480688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.480716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.481166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.481196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.481460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.481488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.481908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.481936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.482250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.482280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 
00:29:28.721 [2024-07-12 11:07:45.482592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.482620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.483046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.483074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.483325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.483355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.483664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.483694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.484166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.484197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.484637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.484665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.485104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.485147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.485621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.485649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.486080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.486108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.486600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.486628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 
00:29:28.721 [2024-07-12 11:07:45.487074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.487103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.487542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.487571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.488002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.488031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.488363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.488394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.488842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.488870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.489321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.489352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.489790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.721 [2024-07-12 11:07:45.489819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.721 qpair failed and we were unable to recover it. 00:29:28.721 [2024-07-12 11:07:45.490099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.490139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.490557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.490585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.491042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.491072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 
00:29:28.722 [2024-07-12 11:07:45.491566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.491595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.491912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.491940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.492358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.492387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.492826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.492854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.493195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.493249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.493536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.493564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.493901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.493929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.494324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.494353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.494664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.494695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.494995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.495032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 
00:29:28.722 [2024-07-12 11:07:45.495452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.495482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.495916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.495945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.496358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.496388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.496813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.496841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.497346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.497375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.497810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.497838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.498085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.498112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.498580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.498609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.499044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.499072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.499369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.499402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 
00:29:28.722 [2024-07-12 11:07:45.499809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.499837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.500274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.500304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.500721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.500749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.500960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.500988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.501396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.501424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.501854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.501882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.502319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.502349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.502676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.502704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.502994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.722 [2024-07-12 11:07:45.503025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.722 qpair failed and we were unable to recover it. 00:29:28.722 [2024-07-12 11:07:45.503334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.503365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 
00:29:28.723 [2024-07-12 11:07:45.503820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.503848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.504258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.504288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.504724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.504754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.505068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.505096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.505435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.505464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.505979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.506007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.506293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.506323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.506755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.506783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.507216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.507246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.507727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.507756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 
00:29:28.723 [2024-07-12 11:07:45.508186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.508217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.508649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.508678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.509104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.509144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.509443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.509471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.509726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.509754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.510170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.510200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.510632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.510659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.510930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.510958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.511364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.511393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.511712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.511746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 
00:29:28.723 [2024-07-12 11:07:45.512160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.512190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.512643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.512671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.513009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.513037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.513452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.513481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.513727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.513755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.514173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.514202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.514638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.514666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.515004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.515033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.515446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.515476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 00:29:28.723 [2024-07-12 11:07:45.515908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.723 [2024-07-12 11:07:45.515936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420 00:29:28.723 qpair failed and we were unable to recover it. 
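On Linux, errno 111 is ECONNREFUSED: each connect() to 10.0.0.2 port 4420 (the standard NVMe/TCP listen port) is refused because nothing is listening there, which is the condition this target-disconnect test drives on purpose. A minimal standalone C sketch, not SPDK code, reproduces the same errno against a local port with no listener:

/* repro_econnrefused.c - minimal sketch, not SPDK code.
 * One TCP connect() to a port with no listener; on Linux this
 * fails with errno = 111 (ECONNREFUSED), matching the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* loopback: refusal is immediate */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

Built with cc repro_econnrefused.c, this prints "connect() failed, errno = 111 (Connection refused)" as long as nothing is bound to 127.0.0.1:4420; a connect to a reachable-but-down remote like 10.0.0.2 behaves the same once the peer's stack answers with RST.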
[... 4 more failed connection attempts, 11:07:45.516367 through 11:07:45.517583 ...]
00:29:28.724 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
[... 1 failed connection attempt at 11:07:45.517875 ...]
00:29:28.724 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
[... 1 failed connection attempt at 11:07:45.518315 ...]
00:29:28.724 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
[... 1 failed connection attempt at 11:07:45.518657 ...]
00:29:28.724 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:28.724 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 1 failed connection attempt at 11:07:45.519134 ...]
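The (( i == 0 )) / return 0 pair traced above looks like the tail of a wait-and-retry helper in autotest_common.sh. A minimal sketch of that shape, with an illustrative probe name (the real helper and its probe are not shown in this log):

  # hedged sketch of a retry helper ending in "(( i == 0 ))" then "return 0";
  # target_is_ready is a hypothetical probe, not an SPDK function
  wait_for_target() {
      local i
      for (( i = 50; i != 0; i-- )); do
          target_is_ready && break   # stop retrying once the probe succeeds
          sleep 0.1
      done
      (( i == 0 )) && return 1       # retries exhausted: report failure
      return 0                       # probe succeeded before the loop ran out
  }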
00:29:28.724 [2024-07-12 11:07:45.519601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.724 [2024-07-12 11:07:45.519629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.724 qpair failed and we were unable to recover it.
[... the same three-record sequence repeats for 89 further connection attempts, 11:07:45.520061 through 11:07:45.557017 ...]
[... 7 more failed connection attempts, 11:07:45.557443 through 11:07:45.560147 ...]
00:29:28.727 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... 2 more failed connection attempts, 11:07:45.560407 through 11:07:45.560734 ...]
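The trap installed here is the usual harness cleanup pattern: dump shared-memory diagnostics, then tear the target down, on interrupt or on normal exit. A generic sketch of the same pattern (the function names below are placeholders, not the SPDK helpers):

  # generic cleanup-trap sketch; collect_diagnostics and teardown_target are placeholders
  cleanup() {
      collect_diagnostics || :   # "|| :" keeps a failing diagnostics step from
                                 # replacing the script's real exit status
      teardown_target
  }
  trap cleanup SIGINT SIGTERM EXIT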
00:29:28.727 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:28.727 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[... 1 failed connection attempt at 11:07:45.561185 ...]
00:29:28.727 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 8 more failed connection attempts, 11:07:45.561638 through 11:07:45.564465 ...]
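rpc_cmd is the test suite's wrapper around SPDK's RPC client. Outside the harness, the same bdev can be created directly; the path below assumes a stock SPDK checkout with the target app already running:

  # create a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

The RPC prints the new bdev's name; the Malloc0 output appears further down, interleaved with the retry errors.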
00:29:28.728 [2024-07-12 11:07:45.564735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.728 [2024-07-12 11:07:45.564764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.728 qpair failed and we were unable to recover it.
[... the same three-record sequence repeats for 39 further connection attempts, 11:07:45.565187 through 11:07:45.581182 ...]
[... 6 more failed connection attempts, 11:07:45.581614 through 11:07:45.583869 ...]
00:29:28.729 Malloc0
[... 2 more failed connection attempts, 11:07:45.584303 through 11:07:45.584768 ...]
00:29:28.729 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
[... 1 failed connection attempt at 11:07:45.585186 ...]
00:29:28.729 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:28.729 [2024-07-12 11:07:45.585507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.729 [2024-07-12 11:07:45.585536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.729 qpair failed and we were unable to recover it.
00:29:28.729 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:28.729 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... retry triplet repeats through 11:07:45.589156 ...]
00:29:28.730 [2024-07-12 11:07:45.589652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.730 [2024-07-12 11:07:45.589681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.730 qpair failed and we were unable to recover it.
[... retry triplet repeats ...]
00:29:28.730 [2024-07-12 11:07:45.591561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... retry triplet repeats through 11:07:45.592913 ...]
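The *** TCP Transport Init *** notice confirms the nvmf_create_transport RPC above took effect inside the target. The same step can be driven by hand against a running nvmf_tgt; a sketch, assuming the default RPC socket at /var/tmp/spdk.sock and mirroring the -o flag the test passes:

    $ scripts/rpc.py nvmf_create_transport -t tcp -o
    $ scripts/rpc.py nvmf_get_transports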
00:29:28.730 [2024-07-12 11:07:45.593194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.730 [2024-07-12 11:07:45.593223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.730 qpair failed and we were unable to recover it.
[... retry triplet repeats through 11:07:45.597061 ...]
00:29:28.730 [2024-07-12 11:07:45.597499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.730 [2024-07-12 11:07:45.597528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.730 qpair failed and we were unable to recover it.
[... retry triplet repeats through 11:07:45.600747 ...]
00:29:28.730 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:28.731 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:28.731 [2024-07-12 11:07:45.601188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.731 [2024-07-12 11:07:45.601218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.731 qpair failed and we were unable to recover it.
00:29:28.731 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:28.731 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... retry triplet repeats through 11:07:45.604360 ...]
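This RPC creates the NVMe-oF subsystem the host will connect to: -a allows any host NQN to connect and -s sets the subsystem serial number. A manual equivalent plus a verification query (sketch against the default RPC socket):

    $ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $ scripts/rpc.py nvmf_get_subsystems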
00:29:28.731 [2024-07-12 11:07:45.604673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.731 [2024-07-12 11:07:45.604702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.731 qpair failed and we were unable to recover it.
[... retry triplet repeats through 11:07:45.608878 ...]
00:29:28.731 [2024-07-12 11:07:45.609259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.731 [2024-07-12 11:07:45.609288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.731 qpair failed and we were unable to recover it.
[... retry triplet repeats through 11:07:45.612671 ...]
00:29:28.732 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:28.732 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:28.732 [2024-07-12 11:07:45.613110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.732 [2024-07-12 11:07:45.613148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.732 qpair failed and we were unable to recover it.
00:29:28.732 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:28.732 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... retry triplet repeats through 11:07:45.615815 ...]
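nvmf_subsystem_add_ns attaches the Malloc0 bdev as a namespace of cnode1; with no explicit namespace ID it receives the first free one, normally 1. The manual form is simply:

    $ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0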
00:29:28.732 [2024-07-12 11:07:45.616269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.732 [2024-07-12 11:07:45.616298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.732 qpair failed and we were unable to recover it.
[... retry triplet repeats through 11:07:45.623984 ...]
00:29:28.733 [2024-07-12 11:07:45.624306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.733 [2024-07-12 11:07:45.624339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.733 qpair failed and we were unable to recover it.
00:29:28.733 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:28.733 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:28.733 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:28.733 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... retry triplet repeats through 11:07:45.627289 ...]
00:29:28.733 [2024-07-12 11:07:45.627694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.733 [2024-07-12 11:07:45.627722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b90000b90 with addr=10.0.0.2, port=4420
00:29:28.733 qpair failed and we were unable to recover it.
[... retry triplet repeats through 11:07:45.631824 ...]
00:29:28.733 [2024-07-12 11:07:45.631895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:28.733 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:28.733 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:28.733 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:28.733 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:28.733 [2024-07-12 11:07:45.642715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.733 [2024-07-12 11:07:45.642922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.733 [2024-07-12 11:07:45.642980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.733 [2024-07-12 11:07:45.643004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.733 [2024-07-12 11:07:45.643024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:28.733 [2024-07-12 11:07:45.643078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:28.733 qpair failed and we were unable to recover it.
00:29:28.733 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:28.733 11:07:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2284801
[... the same Unknown controller ID / failed fabric CONNECT block repeats at 11:07:45.652580 ...]
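Once the listener notice appears, the errno = 111 noise stops: TCP connections now succeed, and the failures move up the stack. The block above shows the new shape: the target rejects the I/O-queue CONNECT because it no longer recognizes controller ID 0x1 (the controller state has been torn down, which is exactly what this disconnect test provokes), the host sees the fabrics CONNECT complete with an error status (sct 1, sc 130), and the qpair is abandoned with CQ transport error -6, which the log itself decodes as No such device or address (ENXIO). What the host is attempting corresponds to an ordinary fabrics connect; with stock nvme-cli the equivalent would be roughly:

    $ nvme discover -t tcp -a 10.0.0.2 -s 4420
    $ nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1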
00:29:28.734 [2024-07-12 11:07:45.662595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.734 [2024-07-12 11:07:45.662696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.734 [2024-07-12 11:07:45.662727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.734 [2024-07-12 11:07:45.662742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.734 [2024-07-12 11:07:45.662752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.734 [2024-07-12 11:07:45.662776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.734 qpair failed and we were unable to recover it. 00:29:28.734 [2024-07-12 11:07:45.672496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.734 [2024-07-12 11:07:45.672588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.734 [2024-07-12 11:07:45.672615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.734 [2024-07-12 11:07:45.672623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.734 [2024-07-12 11:07:45.672629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.734 [2024-07-12 11:07:45.672649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.734 qpair failed and we were unable to recover it. 00:29:28.734 [2024-07-12 11:07:45.682567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.734 [2024-07-12 11:07:45.682708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.734 [2024-07-12 11:07:45.682734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.734 [2024-07-12 11:07:45.682742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.734 [2024-07-12 11:07:45.682749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.734 [2024-07-12 11:07:45.682768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.734 qpair failed and we were unable to recover it. 
00:29:28.997 [2024-07-12 11:07:45.692518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.997 [2024-07-12 11:07:45.692611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.997 [2024-07-12 11:07:45.692635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.997 [2024-07-12 11:07:45.692644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.997 [2024-07-12 11:07:45.692651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.997 [2024-07-12 11:07:45.692671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.997 qpair failed and we were unable to recover it. 00:29:28.997 [2024-07-12 11:07:45.702495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.997 [2024-07-12 11:07:45.702574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.997 [2024-07-12 11:07:45.702601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.997 [2024-07-12 11:07:45.702610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.997 [2024-07-12 11:07:45.702618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.997 [2024-07-12 11:07:45.702639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.997 qpair failed and we were unable to recover it. 00:29:28.997 [2024-07-12 11:07:45.712601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.997 [2024-07-12 11:07:45.712692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.997 [2024-07-12 11:07:45.712722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.997 [2024-07-12 11:07:45.712731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.997 [2024-07-12 11:07:45.712738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.997 [2024-07-12 11:07:45.712761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.997 qpair failed and we were unable to recover it. 
00:29:28.997 [2024-07-12 11:07:45.722596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.997 [2024-07-12 11:07:45.722688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.997 [2024-07-12 11:07:45.722713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.997 [2024-07-12 11:07:45.722723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.997 [2024-07-12 11:07:45.722731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.997 [2024-07-12 11:07:45.722751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.997 qpair failed and we were unable to recover it. 00:29:28.997 [2024-07-12 11:07:45.732599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.997 [2024-07-12 11:07:45.732684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.997 [2024-07-12 11:07:45.732709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.997 [2024-07-12 11:07:45.732718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.997 [2024-07-12 11:07:45.732725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.997 [2024-07-12 11:07:45.732745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.997 qpair failed and we were unable to recover it. 00:29:28.997 [2024-07-12 11:07:45.742644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.997 [2024-07-12 11:07:45.742805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.997 [2024-07-12 11:07:45.742847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.997 [2024-07-12 11:07:45.742857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.997 [2024-07-12 11:07:45.742865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.997 [2024-07-12 11:07:45.742888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.997 qpair failed and we were unable to recover it. 
00:29:28.997 [2024-07-12 11:07:45.752669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.997 [2024-07-12 11:07:45.752762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.997 [2024-07-12 11:07:45.752808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.997 [2024-07-12 11:07:45.752818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.997 [2024-07-12 11:07:45.752825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.997 [2024-07-12 11:07:45.752849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.997 qpair failed and we were unable to recover it. 00:29:28.997 [2024-07-12 11:07:45.762851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.997 [2024-07-12 11:07:45.762950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.997 [2024-07-12 11:07:45.762989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.997 [2024-07-12 11:07:45.762999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.997 [2024-07-12 11:07:45.763006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.997 [2024-07-12 11:07:45.763031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.997 qpair failed and we were unable to recover it. 00:29:28.997 [2024-07-12 11:07:45.772759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.997 [2024-07-12 11:07:45.772848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.997 [2024-07-12 11:07:45.772875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.997 [2024-07-12 11:07:45.772883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.997 [2024-07-12 11:07:45.772890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:28.997 [2024-07-12 11:07:45.772910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.997 qpair failed and we were unable to recover it. 
00:29:29.528 [2024-07-12 11:07:46.414711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.528 [2024-07-12 11:07:46.414799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.528 [2024-07-12 11:07:46.414836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.528 [2024-07-12 11:07:46.414846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.528 [2024-07-12 11:07:46.414853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.528 [2024-07-12 11:07:46.414877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.528 qpair failed and we were unable to recover it. 00:29:29.528 [2024-07-12 11:07:46.424701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.528 [2024-07-12 11:07:46.424786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.528 [2024-07-12 11:07:46.424816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.528 [2024-07-12 11:07:46.424826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.528 [2024-07-12 11:07:46.424832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.528 [2024-07-12 11:07:46.424853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.528 qpair failed and we were unable to recover it. 00:29:29.528 [2024-07-12 11:07:46.434778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.528 [2024-07-12 11:07:46.434881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.528 [2024-07-12 11:07:46.434920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.528 [2024-07-12 11:07:46.434930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.528 [2024-07-12 11:07:46.434937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.528 [2024-07-12 11:07:46.434961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.528 qpair failed and we were unable to recover it. 
00:29:29.528 [2024-07-12 11:07:46.444890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.528 [2024-07-12 11:07:46.444999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.528 [2024-07-12 11:07:46.445037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.528 [2024-07-12 11:07:46.445047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.528 [2024-07-12 11:07:46.445054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.528 [2024-07-12 11:07:46.445080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.528 qpair failed and we were unable to recover it. 00:29:29.528 [2024-07-12 11:07:46.454828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.528 [2024-07-12 11:07:46.454915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.528 [2024-07-12 11:07:46.454943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.528 [2024-07-12 11:07:46.454951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.528 [2024-07-12 11:07:46.454958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.528 [2024-07-12 11:07:46.454978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.528 qpair failed and we were unable to recover it. 00:29:29.528 [2024-07-12 11:07:46.464745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.528 [2024-07-12 11:07:46.464834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.528 [2024-07-12 11:07:46.464861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.528 [2024-07-12 11:07:46.464876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.528 [2024-07-12 11:07:46.464883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.528 [2024-07-12 11:07:46.464903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.528 qpair failed and we were unable to recover it. 
00:29:29.528 [2024-07-12 11:07:46.474765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.528 [2024-07-12 11:07:46.474855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.528 [2024-07-12 11:07:46.474882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.528 [2024-07-12 11:07:46.474890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.529 [2024-07-12 11:07:46.474897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.529 [2024-07-12 11:07:46.474918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.529 qpair failed and we were unable to recover it. 00:29:29.529 [2024-07-12 11:07:46.484894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.529 [2024-07-12 11:07:46.484986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.529 [2024-07-12 11:07:46.485012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.529 [2024-07-12 11:07:46.485021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.529 [2024-07-12 11:07:46.485027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.529 [2024-07-12 11:07:46.485046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.529 qpair failed and we were unable to recover it. 00:29:29.529 [2024-07-12 11:07:46.494901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.529 [2024-07-12 11:07:46.494980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.529 [2024-07-12 11:07:46.495005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.529 [2024-07-12 11:07:46.495013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.529 [2024-07-12 11:07:46.495019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.529 [2024-07-12 11:07:46.495038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.529 qpair failed and we were unable to recover it. 
00:29:29.529 [2024-07-12 11:07:46.504867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.529 [2024-07-12 11:07:46.504969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.529 [2024-07-12 11:07:46.504994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.529 [2024-07-12 11:07:46.505002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.529 [2024-07-12 11:07:46.505009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.529 [2024-07-12 11:07:46.505027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.529 qpair failed and we were unable to recover it. 00:29:29.791 [2024-07-12 11:07:46.515005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.515091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.515115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.515131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.515138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.515157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-07-12 11:07:46.525034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.525128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.525153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.525161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.525168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.525187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-07-12 11:07:46.535066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.535152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.535177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.535185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.535191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.535210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-07-12 11:07:46.545074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.545169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.545194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.545202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.545209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.545228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-07-12 11:07:46.555035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.555160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.555184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.555199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.555205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.555224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-07-12 11:07:46.565135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.565227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.565252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.565260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.565266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.565285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-07-12 11:07:46.575161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.575258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.575281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.575289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.575296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.575315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-07-12 11:07:46.585256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.585368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.585393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.585401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.585408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.585426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-07-12 11:07:46.595183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.595289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.595313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.595321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.595327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.595346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-07-12 11:07:46.605292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.605392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.605417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.605425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.605432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.605451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-07-12 11:07:46.615248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.791 [2024-07-12 11:07:46.615330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.791 [2024-07-12 11:07:46.615353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.791 [2024-07-12 11:07:46.615361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.791 [2024-07-12 11:07:46.615370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.791 [2024-07-12 11:07:46.615388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-07-12 11:07:46.625329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.625415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.625442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.625450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.625457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.625476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-07-12 11:07:46.635246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.635328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.635353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.635361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.635367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.635386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-07-12 11:07:46.645420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.645515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.645546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.645554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.645561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.645579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 
00:29:29.792 [2024-07-12 11:07:46.655395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.655510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.655537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.655545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.655551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.655571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-07-12 11:07:46.665494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.665610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.665635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.665643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.665650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.665668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-07-12 11:07:46.675493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.675576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.675602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.675610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.675616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.675638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 
00:29:29.792 [2024-07-12 11:07:46.685518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.685610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.685635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.685643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.685649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.685675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-07-12 11:07:46.695522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.695599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.695626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.695634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.695641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.695660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-07-12 11:07:46.705579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.705662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.705688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.705696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.705702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.705721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 
00:29:29.792 [2024-07-12 11:07:46.715680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.715763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.715793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.715801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.715808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.715831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-07-12 11:07:46.725637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.725738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.725777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.725788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.725795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.725818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-07-12 11:07:46.735661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.792 [2024-07-12 11:07:46.735755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.792 [2024-07-12 11:07:46.735793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.792 [2024-07-12 11:07:46.735803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.792 [2024-07-12 11:07:46.735815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.792 [2024-07-12 11:07:46.735838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.792 qpair failed and we were unable to recover it. 
00:29:29.792 [2024-07-12 11:07:46.745665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.793 [2024-07-12 11:07:46.745743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.793 [2024-07-12 11:07:46.745771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.793 [2024-07-12 11:07:46.745779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.793 [2024-07-12 11:07:46.745786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.793 [2024-07-12 11:07:46.745805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.793 qpair failed and we were unable to recover it. 00:29:29.793 [2024-07-12 11:07:46.755693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.793 [2024-07-12 11:07:46.755781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.793 [2024-07-12 11:07:46.755806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.793 [2024-07-12 11:07:46.755814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.793 [2024-07-12 11:07:46.755821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.793 [2024-07-12 11:07:46.755840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.793 qpair failed and we were unable to recover it. 00:29:29.793 [2024-07-12 11:07:46.765721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.793 [2024-07-12 11:07:46.765831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.793 [2024-07-12 11:07:46.765871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.793 [2024-07-12 11:07:46.765883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.793 [2024-07-12 11:07:46.765890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:29.793 [2024-07-12 11:07:46.765916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.793 qpair failed and we were unable to recover it. 
00:29:30.055 [2024-07-12 11:07:46.775655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.055 [2024-07-12 11:07:46.775748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.055 [2024-07-12 11:07:46.775779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.055 [2024-07-12 11:07:46.775789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.055 [2024-07-12 11:07:46.775804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.055 [2024-07-12 11:07:46.775829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-12 11:07:46.785818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.055 [2024-07-12 11:07:46.785968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.055 [2024-07-12 11:07:46.786007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.055 [2024-07-12 11:07:46.786017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.055 [2024-07-12 11:07:46.786025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.055 [2024-07-12 11:07:46.786051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-12 11:07:46.795842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.055 [2024-07-12 11:07:46.795927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.055 [2024-07-12 11:07:46.795953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.055 [2024-07-12 11:07:46.795962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.055 [2024-07-12 11:07:46.795969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.055 [2024-07-12 11:07:46.795989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.055 qpair failed and we were unable to recover it. 
00:29:30.055 [2024-07-12 11:07:46.805872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.055 [2024-07-12 11:07:46.805971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.055 [2024-07-12 11:07:46.805998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.055 [2024-07-12 11:07:46.806007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.055 [2024-07-12 11:07:46.806013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.055 [2024-07-12 11:07:46.806033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-12 11:07:46.815910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.055 [2024-07-12 11:07:46.815988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.055 [2024-07-12 11:07:46.816013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.055 [2024-07-12 11:07:46.816021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.055 [2024-07-12 11:07:46.816027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.055 [2024-07-12 11:07:46.816046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-12 11:07:46.825934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.055 [2024-07-12 11:07:46.826019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.055 [2024-07-12 11:07:46.826045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.055 [2024-07-12 11:07:46.826053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.055 [2024-07-12 11:07:46.826059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.055 [2024-07-12 11:07:46.826079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.055 qpair failed and we were unable to recover it. 
00:29:30.055 [2024-07-12 11:07:46.835967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.055 [2024-07-12 11:07:46.836048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.055 [2024-07-12 11:07:46.836073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.055 [2024-07-12 11:07:46.836080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.055 [2024-07-12 11:07:46.836087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.055 [2024-07-12 11:07:46.836107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-12 11:07:46.846019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.055 [2024-07-12 11:07:46.846113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.055 [2024-07-12 11:07:46.846152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.055 [2024-07-12 11:07:46.846161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.055 [2024-07-12 11:07:46.846167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.055 [2024-07-12 11:07:46.846187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-12 11:07:46.855997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.056 [2024-07-12 11:07:46.856094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.056 [2024-07-12 11:07:46.856119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.056 [2024-07-12 11:07:46.856138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.056 [2024-07-12 11:07:46.856145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.056 [2024-07-12 11:07:46.856164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.056 qpair failed and we were unable to recover it. 
00:29:30.056 [2024-07-12 11:07:46.866030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.056 [2024-07-12 11:07:46.866203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.056 [2024-07-12 11:07:46.866229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.056 [2024-07-12 11:07:46.866244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.056 [2024-07-12 11:07:46.866252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.056 [2024-07-12 11:07:46.866272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-12 11:07:46.876043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.056 [2024-07-12 11:07:46.876139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.056 [2024-07-12 11:07:46.876164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.056 [2024-07-12 11:07:46.876172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.056 [2024-07-12 11:07:46.876178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.056 [2024-07-12 11:07:46.876198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-12 11:07:46.886156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.056 [2024-07-12 11:07:46.886289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.056 [2024-07-12 11:07:46.886314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.056 [2024-07-12 11:07:46.886322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.056 [2024-07-12 11:07:46.886328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.056 [2024-07-12 11:07:46.886347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.056 qpair failed and we were unable to recover it. 
00:29:30.056 [2024-07-12 11:07:46.896187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.056 [2024-07-12 11:07:46.896284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.056 [2024-07-12 11:07:46.896309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.056 [2024-07-12 11:07:46.896317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.056 [2024-07-12 11:07:46.896323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.056 [2024-07-12 11:07:46.896343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-12 11:07:46.906195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.056 [2024-07-12 11:07:46.906313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.056 [2024-07-12 11:07:46.906338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.056 [2024-07-12 11:07:46.906346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.056 [2024-07-12 11:07:46.906352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.056 [2024-07-12 11:07:46.906370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-12 11:07:46.916237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.056 [2024-07-12 11:07:46.916343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.056 [2024-07-12 11:07:46.916368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.056 [2024-07-12 11:07:46.916376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.056 [2024-07-12 11:07:46.916383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.056 [2024-07-12 11:07:46.916401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.056 qpair failed and we were unable to recover it. 
[... 66 further CONNECT-failure blocks, identical to the above except for their timestamps (2024-07-12 11:07:46.926 through 11:07:47.578; elapsed-time prefix advancing from 00:29:30.056 to 00:29:30.849), trimmed here; every attempt targets tqpair=0x7f5b90000b90 on qpair id 4, reports sct 1, sc 130, and ends "qpair failed and we were unable to recover it." ...]
00:29:30.849 [2024-07-12 11:07:47.588034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.849 [2024-07-12 11:07:47.588101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.849 [2024-07-12 11:07:47.588116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.849 [2024-07-12 11:07:47.588129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.849 [2024-07-12 11:07:47.588135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.849 [2024-07-12 11:07:47.588150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.849 qpair failed and we were unable to recover it. 00:29:30.849 [2024-07-12 11:07:47.598100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.849 [2024-07-12 11:07:47.598175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.849 [2024-07-12 11:07:47.598191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.849 [2024-07-12 11:07:47.598198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.849 [2024-07-12 11:07:47.598205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.849 [2024-07-12 11:07:47.598220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.849 qpair failed and we were unable to recover it. 00:29:30.849 [2024-07-12 11:07:47.608084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.849 [2024-07-12 11:07:47.608168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.849 [2024-07-12 11:07:47.608184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.849 [2024-07-12 11:07:47.608191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.849 [2024-07-12 11:07:47.608197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.849 [2024-07-12 11:07:47.608212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.849 qpair failed and we were unable to recover it. 
00:29:30.849 [2024-07-12 11:07:47.618111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.849 [2024-07-12 11:07:47.618178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.849 [2024-07-12 11:07:47.618194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.849 [2024-07-12 11:07:47.618201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.849 [2024-07-12 11:07:47.618207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.849 [2024-07-12 11:07:47.618221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.849 qpair failed and we were unable to recover it. 00:29:30.849 [2024-07-12 11:07:47.628133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.849 [2024-07-12 11:07:47.628205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.849 [2024-07-12 11:07:47.628221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.849 [2024-07-12 11:07:47.628228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.849 [2024-07-12 11:07:47.628234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.849 [2024-07-12 11:07:47.628249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.849 qpair failed and we were unable to recover it. 00:29:30.849 [2024-07-12 11:07:47.638204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.849 [2024-07-12 11:07:47.638278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.849 [2024-07-12 11:07:47.638293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.849 [2024-07-12 11:07:47.638301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.849 [2024-07-12 11:07:47.638307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.849 [2024-07-12 11:07:47.638321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.849 qpair failed and we were unable to recover it. 
00:29:30.849 [2024-07-12 11:07:47.648193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.849 [2024-07-12 11:07:47.648337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.849 [2024-07-12 11:07:47.648352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.849 [2024-07-12 11:07:47.648360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.849 [2024-07-12 11:07:47.648366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.849 [2024-07-12 11:07:47.648380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.849 qpair failed and we were unable to recover it. 00:29:30.849 [2024-07-12 11:07:47.658239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.849 [2024-07-12 11:07:47.658299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.849 [2024-07-12 11:07:47.658314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.849 [2024-07-12 11:07:47.658322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.849 [2024-07-12 11:07:47.658328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.849 [2024-07-12 11:07:47.658342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.849 qpair failed and we were unable to recover it. 00:29:30.849 [2024-07-12 11:07:47.668285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.849 [2024-07-12 11:07:47.668365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.849 [2024-07-12 11:07:47.668380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.849 [2024-07-12 11:07:47.668387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.849 [2024-07-12 11:07:47.668397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.849 [2024-07-12 11:07:47.668412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.849 qpair failed and we were unable to recover it. 
00:29:30.849 [2024-07-12 11:07:47.678263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.849 [2024-07-12 11:07:47.678325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.849 [2024-07-12 11:07:47.678340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.849 [2024-07-12 11:07:47.678347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.849 [2024-07-12 11:07:47.678353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.849 [2024-07-12 11:07:47.678367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.849 qpair failed and we were unable to recover it. 00:29:30.849 [2024-07-12 11:07:47.688289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.688402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.688416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.688424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.688429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.688443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 00:29:30.850 [2024-07-12 11:07:47.698215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.698283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.698298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.698305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.698311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.698325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 
00:29:30.850 [2024-07-12 11:07:47.708368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.708542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.708558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.708566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.708572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.708585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 00:29:30.850 [2024-07-12 11:07:47.718491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.718557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.718573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.718580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.718586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.718601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 00:29:30.850 [2024-07-12 11:07:47.728387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.728457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.728472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.728479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.728485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.728499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 
00:29:30.850 [2024-07-12 11:07:47.738407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.738468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.738483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.738490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.738497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.738510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 00:29:30.850 [2024-07-12 11:07:47.748461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.748534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.748549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.748556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.748562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.748577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 00:29:30.850 [2024-07-12 11:07:47.758487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.758549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.758564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.758575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.758581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.758595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 
00:29:30.850 [2024-07-12 11:07:47.768474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.768543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.768559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.768566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.768571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.768586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 00:29:30.850 [2024-07-12 11:07:47.778501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.778632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.778648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.778655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.778661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.778675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 00:29:30.850 [2024-07-12 11:07:47.788562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.788660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.788674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.788682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.788688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.788702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 
00:29:30.850 [2024-07-12 11:07:47.798484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.798555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.798570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.798577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.798583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.798597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 00:29:30.850 [2024-07-12 11:07:47.808655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.808719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.808735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.808742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.808748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.808762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 00:29:30.850 [2024-07-12 11:07:47.818648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.818713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.818728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.850 [2024-07-12 11:07:47.818735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.850 [2024-07-12 11:07:47.818741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.850 [2024-07-12 11:07:47.818755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.850 qpair failed and we were unable to recover it. 
00:29:30.850 [2024-07-12 11:07:47.828689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.850 [2024-07-12 11:07:47.828753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.850 [2024-07-12 11:07:47.828771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.851 [2024-07-12 11:07:47.828778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.851 [2024-07-12 11:07:47.828784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:30.851 [2024-07-12 11:07:47.828799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:30.851 qpair failed and we were unable to recover it. 00:29:31.113 [2024-07-12 11:07:47.838603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.838668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.838685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.838693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.838699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.838715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.113 qpair failed and we were unable to recover it. 00:29:31.113 [2024-07-12 11:07:47.848734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.848808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.848828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.848836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.848842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.848858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.113 qpair failed and we were unable to recover it. 
00:29:31.113 [2024-07-12 11:07:47.858759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.858868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.858893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.858901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.858908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.858927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.113 qpair failed and we were unable to recover it. 00:29:31.113 [2024-07-12 11:07:47.868784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.868866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.868891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.868899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.868906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.868925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.113 qpair failed and we were unable to recover it. 00:29:31.113 [2024-07-12 11:07:47.878816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.878887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.878913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.878922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.878929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.878948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.113 qpair failed and we were unable to recover it. 
00:29:31.113 [2024-07-12 11:07:47.888890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.888974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.888998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.889007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.889014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.889042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.113 qpair failed and we were unable to recover it. 00:29:31.113 [2024-07-12 11:07:47.898878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.898942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.898959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.898966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.898973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.898988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.113 qpair failed and we were unable to recover it. 00:29:31.113 [2024-07-12 11:07:47.908828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.908888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.908904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.908911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.908917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.908932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.113 qpair failed and we were unable to recover it. 
00:29:31.113 [2024-07-12 11:07:47.918940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.919006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.919022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.919029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.919035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.919049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.113 qpair failed and we were unable to recover it. 00:29:31.113 [2024-07-12 11:07:47.928969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.929037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.929052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.929058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.929065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.929079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.113 qpair failed and we were unable to recover it. 00:29:31.113 [2024-07-12 11:07:47.939044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.113 [2024-07-12 11:07:47.939105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.113 [2024-07-12 11:07:47.939130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.113 [2024-07-12 11:07:47.939138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.113 [2024-07-12 11:07:47.939144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.113 [2024-07-12 11:07:47.939158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 
00:29:31.114 [2024-07-12 11:07:47.949012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:47.949075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:47.949091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:47.949099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:47.949105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:47.949119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 00:29:31.114 [2024-07-12 11:07:47.958983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:47.959094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:47.959110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:47.959117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:47.959129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:47.959145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 00:29:31.114 [2024-07-12 11:07:47.969069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:47.969145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:47.969161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:47.969168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:47.969174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:47.969189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 
00:29:31.114 [2024-07-12 11:07:47.979091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:47.979160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:47.979176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:47.979183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:47.979189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:47.979207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 00:29:31.114 [2024-07-12 11:07:47.989112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:47.989178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:47.989193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:47.989200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:47.989206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:47.989220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 00:29:31.114 [2024-07-12 11:07:47.999215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:47.999291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:47.999306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:47.999313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:47.999320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:47.999335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 
00:29:31.114 [2024-07-12 11:07:48.009169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:48.009279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:48.009294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:48.009301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:48.009307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:48.009321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 00:29:31.114 [2024-07-12 11:07:48.019219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:48.019284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:48.019299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:48.019306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:48.019312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:48.019327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 00:29:31.114 [2024-07-12 11:07:48.029211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:48.029345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:48.029360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:48.029367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:48.029373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:48.029388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 
00:29:31.114 [2024-07-12 11:07:48.039301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:48.039364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:48.039379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:48.039387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:48.039393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:48.039407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 00:29:31.114 [2024-07-12 11:07:48.049317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:48.049385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:48.049400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:48.049408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:48.049414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:48.049428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 00:29:31.114 [2024-07-12 11:07:48.059296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:48.059363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:48.059378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:48.059385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:48.059391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:48.059405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 
00:29:31.114 [2024-07-12 11:07:48.069402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:48.069466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:48.069482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:48.069489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:48.069500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:48.069515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.114 qpair failed and we were unable to recover it. 00:29:31.114 [2024-07-12 11:07:48.079363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.114 [2024-07-12 11:07:48.079426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.114 [2024-07-12 11:07:48.079442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.114 [2024-07-12 11:07:48.079449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.114 [2024-07-12 11:07:48.079455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.114 [2024-07-12 11:07:48.079470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.115 qpair failed and we were unable to recover it. 00:29:31.115 [2024-07-12 11:07:48.089380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.115 [2024-07-12 11:07:48.089490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.115 [2024-07-12 11:07:48.089505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.115 [2024-07-12 11:07:48.089512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.115 [2024-07-12 11:07:48.089518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.115 [2024-07-12 11:07:48.089533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.115 qpair failed and we were unable to recover it. 
00:29:31.377 [2024-07-12 11:07:48.099401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.377 [2024-07-12 11:07:48.099466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.377 [2024-07-12 11:07:48.099481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.377 [2024-07-12 11:07:48.099489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.377 [2024-07-12 11:07:48.099495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.377 [2024-07-12 11:07:48.099509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.377 qpair failed and we were unable to recover it. 00:29:31.377 [2024-07-12 11:07:48.109408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.377 [2024-07-12 11:07:48.109470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.377 [2024-07-12 11:07:48.109486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.377 [2024-07-12 11:07:48.109493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.377 [2024-07-12 11:07:48.109498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.377 [2024-07-12 11:07:48.109513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.377 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-12 11:07:48.119477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.119569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.119584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.119592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.119598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.119612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 
00:29:31.378 [2024-07-12 11:07:48.129478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.129552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.129567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.129574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.129579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.129594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-12 11:07:48.139646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.139708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.139724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.139731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.139737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.139752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-12 11:07:48.149541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.149603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.149618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.149625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.149631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.149646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 
00:29:31.378 [2024-07-12 11:07:48.159577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.159640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.159655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.159666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.159672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.159686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-12 11:07:48.169650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.169743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.169760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.169768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.169774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.169789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-12 11:07:48.179637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.179698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.179714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.179721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.179727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.179741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 
00:29:31.378 [2024-07-12 11:07:48.189668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.189734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.189750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.189757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.189763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.189777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-12 11:07:48.199683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.199786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.199802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.199809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.199815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.199829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-12 11:07:48.209738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.209812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.209828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.209835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.209841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.209856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 
00:29:31.378 [2024-07-12 11:07:48.219754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.219819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.219835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.219842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.219848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.219862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-12 11:07:48.229777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.229842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.229857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.229864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.229870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.229884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-12 11:07:48.239862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.239934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.239950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.239957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.239963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.239977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 
00:29:31.378 [2024-07-12 11:07:48.249827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.378 [2024-07-12 11:07:48.249907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.378 [2024-07-12 11:07:48.249923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.378 [2024-07-12 11:07:48.249934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.378 [2024-07-12 11:07:48.249940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.378 [2024-07-12 11:07:48.249954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-12 11:07:48.259844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.379 [2024-07-12 11:07:48.259906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.379 [2024-07-12 11:07:48.259922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.379 [2024-07-12 11:07:48.259929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.379 [2024-07-12 11:07:48.259935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.379 [2024-07-12 11:07:48.259949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-12 11:07:48.269947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.379 [2024-07-12 11:07:48.270007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.379 [2024-07-12 11:07:48.270022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.379 [2024-07-12 11:07:48.270029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.379 [2024-07-12 11:07:48.270035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.379 [2024-07-12 11:07:48.270049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.379 qpair failed and we were unable to recover it. 
00:29:31.379 [2024-07-12 11:07:48.279805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.379 [2024-07-12 11:07:48.279879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.379 [2024-07-12 11:07:48.279894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.379 [2024-07-12 11:07:48.279901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.379 [2024-07-12 11:07:48.279908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.379 [2024-07-12 11:07:48.279922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-12 11:07:48.289929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.379 [2024-07-12 11:07:48.290035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.379 [2024-07-12 11:07:48.290051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.379 [2024-07-12 11:07:48.290058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.379 [2024-07-12 11:07:48.290064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.379 [2024-07-12 11:07:48.290078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-12 11:07:48.299882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.379 [2024-07-12 11:07:48.299950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.379 [2024-07-12 11:07:48.299965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.379 [2024-07-12 11:07:48.299972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.379 [2024-07-12 11:07:48.299978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.379 [2024-07-12 11:07:48.299992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.379 qpair failed and we were unable to recover it. 
00:29:31.379 [2024-07-12 11:07:48.310002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.379 [2024-07-12 11:07:48.310068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.379 [2024-07-12 11:07:48.310083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.379 [2024-07-12 11:07:48.310090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.379 [2024-07-12 11:07:48.310096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.379 [2024-07-12 11:07:48.310110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-12 11:07:48.320013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.379 [2024-07-12 11:07:48.320130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.379 [2024-07-12 11:07:48.320147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.379 [2024-07-12 11:07:48.320154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.379 [2024-07-12 11:07:48.320160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.379 [2024-07-12 11:07:48.320176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-12 11:07:48.330096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.379 [2024-07-12 11:07:48.330168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.379 [2024-07-12 11:07:48.330184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.379 [2024-07-12 11:07:48.330191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.379 [2024-07-12 11:07:48.330197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.379 [2024-07-12 11:07:48.330212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.379 qpair failed and we were unable to recover it. 
00:29:31.379 [2024-07-12 11:07:48.340059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.379 [2024-07-12 11:07:48.340127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.379 [2024-07-12 11:07:48.340146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.379 [2024-07-12 11:07:48.340153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.379 [2024-07-12 11:07:48.340159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.379 [2024-07-12 11:07:48.340174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-12 11:07:48.350097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.379 [2024-07-12 11:07:48.350163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.379 [2024-07-12 11:07:48.350179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.379 [2024-07-12 11:07:48.350186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.379 [2024-07-12 11:07:48.350192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.379 [2024-07-12 11:07:48.350207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.641 [2024-07-12 11:07:48.360119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.641 [2024-07-12 11:07:48.360194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.641 [2024-07-12 11:07:48.360210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.641 [2024-07-12 11:07:48.360217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.641 [2024-07-12 11:07:48.360223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.641 [2024-07-12 11:07:48.360238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.641 qpair failed and we were unable to recover it. 
00:29:31.641 [2024-07-12 11:07:48.370145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.641 [2024-07-12 11:07:48.370212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.641 [2024-07-12 11:07:48.370228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.641 [2024-07-12 11:07:48.370235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.641 [2024-07-12 11:07:48.370241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.641 [2024-07-12 11:07:48.370256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.641 qpair failed and we were unable to recover it. 00:29:31.641 [2024-07-12 11:07:48.380152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.641 [2024-07-12 11:07:48.380212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.641 [2024-07-12 11:07:48.380227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.641 [2024-07-12 11:07:48.380234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.641 [2024-07-12 11:07:48.380240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.641 [2024-07-12 11:07:48.380259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.641 qpair failed and we were unable to recover it. 00:29:31.641 [2024-07-12 11:07:48.390191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.641 [2024-07-12 11:07:48.390254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.641 [2024-07-12 11:07:48.390269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.641 [2024-07-12 11:07:48.390277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.641 [2024-07-12 11:07:48.390283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.641 [2024-07-12 11:07:48.390297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.641 qpair failed and we were unable to recover it. 
00:29:31.641 [2024-07-12 11:07:48.400247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.400314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.400329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.400336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.400342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.400356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 00:29:31.642 [2024-07-12 11:07:48.410270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.410335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.410351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.410358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.410364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.410378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 00:29:31.642 [2024-07-12 11:07:48.420304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.420365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.420380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.420387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.420393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.420407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 
00:29:31.642 [2024-07-12 11:07:48.430312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.430376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.430395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.430402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.430408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.430423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 00:29:31.642 [2024-07-12 11:07:48.440330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.440396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.440412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.440419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.440425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.440439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 00:29:31.642 [2024-07-12 11:07:48.450402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.450472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.450487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.450494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.450500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.450515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 
00:29:31.642 [2024-07-12 11:07:48.460279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.460355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.460370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.460376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.460382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.460397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 00:29:31.642 [2024-07-12 11:07:48.470411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.470475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.470490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.470497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.470507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.470521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 00:29:31.642 [2024-07-12 11:07:48.480451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.480515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.480529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.480536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.480542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.480556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 
00:29:31.642 [2024-07-12 11:07:48.490511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.490580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.490595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.490602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.490608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.490622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 00:29:31.642 [2024-07-12 11:07:48.500496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.500562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.500577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.500584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.500590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.500604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 00:29:31.642 [2024-07-12 11:07:48.510526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.510590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.510605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.510611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.510618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.510632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 
00:29:31.642 [2024-07-12 11:07:48.520593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.520669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.520685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.520692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.520698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.520711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 00:29:31.642 [2024-07-12 11:07:48.530585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.530652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.530667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.642 [2024-07-12 11:07:48.530674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.642 [2024-07-12 11:07:48.530680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.642 [2024-07-12 11:07:48.530694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.642 qpair failed and we were unable to recover it. 00:29:31.642 [2024-07-12 11:07:48.540624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.642 [2024-07-12 11:07:48.540686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.642 [2024-07-12 11:07:48.540701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.643 [2024-07-12 11:07:48.540708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.643 [2024-07-12 11:07:48.540714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.643 [2024-07-12 11:07:48.540728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.643 qpair failed and we were unable to recover it. 
00:29:31.643 [2024-07-12 11:07:48.550657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.643 [2024-07-12 11:07:48.550721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.643 [2024-07-12 11:07:48.550736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.643 [2024-07-12 11:07:48.550743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.643 [2024-07-12 11:07:48.550749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.643 [2024-07-12 11:07:48.550763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.643 qpair failed and we were unable to recover it. 00:29:31.643 [2024-07-12 11:07:48.560669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.643 [2024-07-12 11:07:48.560747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.643 [2024-07-12 11:07:48.560762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.643 [2024-07-12 11:07:48.560772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.643 [2024-07-12 11:07:48.560778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.643 [2024-07-12 11:07:48.560793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.643 qpair failed and we were unable to recover it. 00:29:31.643 [2024-07-12 11:07:48.570582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.643 [2024-07-12 11:07:48.570655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.643 [2024-07-12 11:07:48.570669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.643 [2024-07-12 11:07:48.570676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.643 [2024-07-12 11:07:48.570683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.643 [2024-07-12 11:07:48.570697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.643 qpair failed and we were unable to recover it. 
00:29:31.643 [2024-07-12 11:07:48.580615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.643 [2024-07-12 11:07:48.580681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.643 [2024-07-12 11:07:48.580695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.643 [2024-07-12 11:07:48.580702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.643 [2024-07-12 11:07:48.580709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.643 [2024-07-12 11:07:48.580723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.643 qpair failed and we were unable to recover it. 00:29:31.643 [2024-07-12 11:07:48.590783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.643 [2024-07-12 11:07:48.590850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.643 [2024-07-12 11:07:48.590865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.643 [2024-07-12 11:07:48.590872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.643 [2024-07-12 11:07:48.590878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.643 [2024-07-12 11:07:48.590892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.643 qpair failed and we were unable to recover it. 00:29:31.643 [2024-07-12 11:07:48.600776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.643 [2024-07-12 11:07:48.600855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.643 [2024-07-12 11:07:48.600880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.643 [2024-07-12 11:07:48.600889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.643 [2024-07-12 11:07:48.600896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.643 [2024-07-12 11:07:48.600915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.643 qpair failed and we were unable to recover it. 
00:29:31.643 [2024-07-12 11:07:48.610815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.643 [2024-07-12 11:07:48.610927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.643 [2024-07-12 11:07:48.610952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.643 [2024-07-12 11:07:48.610961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.643 [2024-07-12 11:07:48.610968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.643 [2024-07-12 11:07:48.610988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.643 qpair failed and we were unable to recover it. 00:29:31.643 [2024-07-12 11:07:48.620822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.643 [2024-07-12 11:07:48.620901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.643 [2024-07-12 11:07:48.620919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.643 [2024-07-12 11:07:48.620927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.643 [2024-07-12 11:07:48.620933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.643 [2024-07-12 11:07:48.620950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.643 qpair failed and we were unable to recover it. 00:29:31.905 [2024-07-12 11:07:48.630859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.905 [2024-07-12 11:07:48.630928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.905 [2024-07-12 11:07:48.630945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.905 [2024-07-12 11:07:48.630953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.905 [2024-07-12 11:07:48.630959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.905 [2024-07-12 11:07:48.630974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.905 qpair failed and we were unable to recover it. 
00:29:31.905 [2024-07-12 11:07:48.640880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.905 [2024-07-12 11:07:48.640944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.905 [2024-07-12 11:07:48.640960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.905 [2024-07-12 11:07:48.640967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.905 [2024-07-12 11:07:48.640973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.905 [2024-07-12 11:07:48.640988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.905 qpair failed and we were unable to recover it. 00:29:31.905 [2024-07-12 11:07:48.650913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.905 [2024-07-12 11:07:48.650978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.905 [2024-07-12 11:07:48.650995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.905 [2024-07-12 11:07:48.651006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.905 [2024-07-12 11:07:48.651012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.905 [2024-07-12 11:07:48.651027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.905 qpair failed and we were unable to recover it. 00:29:31.905 [2024-07-12 11:07:48.660958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.905 [2024-07-12 11:07:48.661024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.905 [2024-07-12 11:07:48.661040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.905 [2024-07-12 11:07:48.661047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.905 [2024-07-12 11:07:48.661053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:31.905 [2024-07-12 11:07:48.661067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.905 qpair failed and we were unable to recover it. 
00:29:31.905 [2024-07-12 11:07:48.670993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.905 [2024-07-12 11:07:48.671059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.905 [2024-07-12 11:07:48.671075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.905 [2024-07-12 11:07:48.671082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.905 [2024-07-12 11:07:48.671088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.905 [2024-07-12 11:07:48.671102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.905 qpair failed and we were unable to recover it.
00:29:31.905 [2024-07-12 11:07:48.680994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.905 [2024-07-12 11:07:48.681058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.905 [2024-07-12 11:07:48.681074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.905 [2024-07-12 11:07:48.681081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.905 [2024-07-12 11:07:48.681087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.905 [2024-07-12 11:07:48.681101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.905 qpair failed and we were unable to recover it.
00:29:31.905 [2024-07-12 11:07:48.691029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.691100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.691115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.691127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.691134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.691148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.701118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.701196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.701211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.701218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.701224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.701239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.711080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.711149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.711165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.711172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.711177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.711192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.721018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.721080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.721097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.721104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.721110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.721131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.731151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.731225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.731241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.731248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.731255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.731269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.741165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.741230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.741249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.741256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.741262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.741277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.751192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.751254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.751270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.751277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.751283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.751298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.761226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.761293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.761308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.761315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.761321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.761336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.771258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.771376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.771391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.771398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.771405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.771419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.781346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.781417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.781432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.781439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.781445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.781463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.791300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.791364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.791379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.791386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.791392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.791406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.801386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.801468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.801483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.801490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.801496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.801511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.811371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.811438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.811453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.811460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.811466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.811480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.821276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.821345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.821360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.906 [2024-07-12 11:07:48.821368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.906 [2024-07-12 11:07:48.821373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.906 [2024-07-12 11:07:48.821388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.906 qpair failed and we were unable to recover it.
00:29:31.906 [2024-07-12 11:07:48.831513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.906 [2024-07-12 11:07:48.831576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.906 [2024-07-12 11:07:48.831595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.907 [2024-07-12 11:07:48.831602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.907 [2024-07-12 11:07:48.831608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.907 [2024-07-12 11:07:48.831623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.907 qpair failed and we were unable to recover it.
00:29:31.907 [2024-07-12 11:07:48.841440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.907 [2024-07-12 11:07:48.841505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.907 [2024-07-12 11:07:48.841520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.907 [2024-07-12 11:07:48.841527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.907 [2024-07-12 11:07:48.841533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.907 [2024-07-12 11:07:48.841547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.907 qpair failed and we were unable to recover it.
00:29:31.907 [2024-07-12 11:07:48.851488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.907 [2024-07-12 11:07:48.851556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.907 [2024-07-12 11:07:48.851572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.907 [2024-07-12 11:07:48.851579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.907 [2024-07-12 11:07:48.851585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.907 [2024-07-12 11:07:48.851599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.907 qpair failed and we were unable to recover it.
00:29:31.907 [2024-07-12 11:07:48.861486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.907 [2024-07-12 11:07:48.861558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.907 [2024-07-12 11:07:48.861573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.907 [2024-07-12 11:07:48.861580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.907 [2024-07-12 11:07:48.861586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.907 [2024-07-12 11:07:48.861601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.907 qpair failed and we were unable to recover it.
00:29:31.907 [2024-07-12 11:07:48.871509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.907 [2024-07-12 11:07:48.871574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.907 [2024-07-12 11:07:48.871589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.907 [2024-07-12 11:07:48.871596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.907 [2024-07-12 11:07:48.871606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.907 [2024-07-12 11:07:48.871620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.907 qpair failed and we were unable to recover it.
00:29:31.907 [2024-07-12 11:07:48.881548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.907 [2024-07-12 11:07:48.881615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.907 [2024-07-12 11:07:48.881631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.907 [2024-07-12 11:07:48.881638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.907 [2024-07-12 11:07:48.881644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:31.907 [2024-07-12 11:07:48.881658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:31.907 qpair failed and we were unable to recover it.
00:29:32.168 [2024-07-12 11:07:48.891562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.168 [2024-07-12 11:07:48.891641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.168 [2024-07-12 11:07:48.891657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.168 [2024-07-12 11:07:48.891664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.168 [2024-07-12 11:07:48.891670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.168 [2024-07-12 11:07:48.891685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.168 qpair failed and we were unable to recover it.
00:29:32.168 [2024-07-12 11:07:48.901592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.168 [2024-07-12 11:07:48.901659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.168 [2024-07-12 11:07:48.901675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.168 [2024-07-12 11:07:48.901682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.168 [2024-07-12 11:07:48.901688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:48.901703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:48.911673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:48.911754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:48.911770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:48.911777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:48.911783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:48.911797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:48.921676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:48.921743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:48.921760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:48.921767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:48.921773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:48.921789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:48.931698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:48.931784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:48.931799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:48.931806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:48.931812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:48.931827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:48.941656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:48.941725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:48.941742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:48.941749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:48.941756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:48.941770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:48.951770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:48.951847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:48.951863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:48.951870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:48.951876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:48.951891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:48.961643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:48.961706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:48.961721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:48.961728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:48.961738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:48.961753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:48.971791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:48.971860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:48.971875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:48.971883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:48.971889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:48.971903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:48.981812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:48.981875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:48.981890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:48.981898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:48.981904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:48.981918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:48.991719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:48.991789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:48.991804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:48.991811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:48.991817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:48.991832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:49.001849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:49.001919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:49.001935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:49.001942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:49.001948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:49.001962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:49.011892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:49.011959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:49.011975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:49.011982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:49.011988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:49.012002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:49.021813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:49.021875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:49.021890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:49.021897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:49.021903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:49.021918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:49.031929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:49.032011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:49.032026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:49.032033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:49.032039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.169 [2024-07-12 11:07:49.032054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.169 qpair failed and we were unable to recover it.
00:29:32.169 [2024-07-12 11:07:49.041986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.169 [2024-07-12 11:07:49.042048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.169 [2024-07-12 11:07:49.042064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.169 [2024-07-12 11:07:49.042072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.169 [2024-07-12 11:07:49.042078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.042092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.170 [2024-07-12 11:07:49.051990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.170 [2024-07-12 11:07:49.052058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.170 [2024-07-12 11:07:49.052074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.170 [2024-07-12 11:07:49.052084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.170 [2024-07-12 11:07:49.052090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.052105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.170 [2024-07-12 11:07:49.062014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.170 [2024-07-12 11:07:49.062090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.170 [2024-07-12 11:07:49.062105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.170 [2024-07-12 11:07:49.062112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.170 [2024-07-12 11:07:49.062118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.062138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.170 [2024-07-12 11:07:49.071943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.170 [2024-07-12 11:07:49.072006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.170 [2024-07-12 11:07:49.072021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.170 [2024-07-12 11:07:49.072029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.170 [2024-07-12 11:07:49.072035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.072050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.170 [2024-07-12 11:07:49.082066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.170 [2024-07-12 11:07:49.082142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.170 [2024-07-12 11:07:49.082158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.170 [2024-07-12 11:07:49.082165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.170 [2024-07-12 11:07:49.082171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.082186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.170 [2024-07-12 11:07:49.092102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.170 [2024-07-12 11:07:49.092174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.170 [2024-07-12 11:07:49.092189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.170 [2024-07-12 11:07:49.092196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.170 [2024-07-12 11:07:49.092202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.092217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.170 [2024-07-12 11:07:49.102118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.170 [2024-07-12 11:07:49.102186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.170 [2024-07-12 11:07:49.102201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.170 [2024-07-12 11:07:49.102208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.170 [2024-07-12 11:07:49.102214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.102229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.170 [2024-07-12 11:07:49.112131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.170 [2024-07-12 11:07:49.112192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.170 [2024-07-12 11:07:49.112207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.170 [2024-07-12 11:07:49.112214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.170 [2024-07-12 11:07:49.112220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.112235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.170 [2024-07-12 11:07:49.122160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.170 [2024-07-12 11:07:49.122234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.170 [2024-07-12 11:07:49.122249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.170 [2024-07-12 11:07:49.122256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.170 [2024-07-12 11:07:49.122262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.122276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.170 [2024-07-12 11:07:49.132204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.170 [2024-07-12 11:07:49.132276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.170 [2024-07-12 11:07:49.132291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.170 [2024-07-12 11:07:49.132298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.170 [2024-07-12 11:07:49.132304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.132318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.170 [2024-07-12 11:07:49.142156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.170 [2024-07-12 11:07:49.142228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.170 [2024-07-12 11:07:49.142249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.170 [2024-07-12 11:07:49.142256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.170 [2024-07-12 11:07:49.142262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.170 [2024-07-12 11:07:49.142280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.170 qpair failed and we were unable to recover it.
00:29:32.432 [2024-07-12 11:07:49.152256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.432 [2024-07-12 11:07:49.152323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.432 [2024-07-12 11:07:49.152339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.432 [2024-07-12 11:07:49.152346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.432 [2024-07-12 11:07:49.152352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.432 [2024-07-12 11:07:49.152367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.432 qpair failed and we were unable to recover it.
00:29:32.432 [2024-07-12 11:07:49.162205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.432 [2024-07-12 11:07:49.162305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.432 [2024-07-12 11:07:49.162321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.432 [2024-07-12 11:07:49.162328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.432 [2024-07-12 11:07:49.162334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.432 [2024-07-12 11:07:49.162348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.432 qpair failed and we were unable to recover it.
00:29:32.432 [2024-07-12 11:07:49.172326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.432 [2024-07-12 11:07:49.172395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.432 [2024-07-12 11:07:49.172411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.432 [2024-07-12 11:07:49.172418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.432 [2024-07-12 11:07:49.172424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.432 [2024-07-12 11:07:49.172439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.432 qpair failed and we were unable to recover it.
00:29:32.432 [2024-07-12 11:07:49.182324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.432 [2024-07-12 11:07:49.182382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.432 [2024-07-12 11:07:49.182398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.432 [2024-07-12 11:07:49.182405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.432 [2024-07-12 11:07:49.182411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.432 [2024-07-12 11:07:49.182428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.432 qpair failed and we were unable to recover it.
00:29:32.432 [2024-07-12 11:07:49.192358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.432 [2024-07-12 11:07:49.192422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.432 [2024-07-12 11:07:49.192438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.432 [2024-07-12 11:07:49.192445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.432 [2024-07-12 11:07:49.192451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.432 [2024-07-12 11:07:49.192465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.432 qpair failed and we were unable to recover it.
00:29:32.432 [2024-07-12 11:07:49.202350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.432 [2024-07-12 11:07:49.202414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.432 [2024-07-12 11:07:49.202430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.432 [2024-07-12 11:07:49.202437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.432 [2024-07-12 11:07:49.202443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.432 [2024-07-12 11:07:49.202458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.432 qpair failed and we were unable to recover it.
00:29:32.432 [2024-07-12 11:07:49.212398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.432 [2024-07-12 11:07:49.212467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.432 [2024-07-12 11:07:49.212483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.432 [2024-07-12 11:07:49.212490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.432 [2024-07-12 11:07:49.212495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.432 [2024-07-12 11:07:49.212509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.432 qpair failed and we were unable to recover it.
00:29:32.432 [2024-07-12 11:07:49.222478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.432 [2024-07-12 11:07:49.222581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.432 [2024-07-12 11:07:49.222597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.432 [2024-07-12 11:07:49.222604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.432 [2024-07-12 11:07:49.222610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.432 [2024-07-12 11:07:49.222625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.432 qpair failed and we were unable to recover it.
00:29:32.432 [2024-07-12 11:07:49.232491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.432 [2024-07-12 11:07:49.232556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.432 [2024-07-12 11:07:49.232575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.432 [2024-07-12 11:07:49.232582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.432 [2024-07-12 11:07:49.232588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.232602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.242403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.242513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.242528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.242536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.242542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.242556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.252448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.252512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.252527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.252534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.252540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.252554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.262617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.262680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.262696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.262702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.262709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.262726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.272611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.272675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.272691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.272698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.272708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.272722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.282615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.282679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.282694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.282701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.282707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.282721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.292689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.292803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.292819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.292825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.292831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.292846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.302643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.302721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.302736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.302743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.302749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.302763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.312696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.312759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.312774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.312781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.312787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.312801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.322705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.434 [2024-07-12 11:07:49.322790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.434 [2024-07-12 11:07:49.322805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.434 [2024-07-12 11:07:49.322812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.434 [2024-07-12 11:07:49.322818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.434 [2024-07-12 11:07:49.322832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.434 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.332628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.332707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.332722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.332729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.332735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.332750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.342751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.342816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.342831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.342838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.342844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.342858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.352790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.352855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.352870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.352877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.352884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.352898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.362800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.362900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.362918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.362926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.362939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.362955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.372829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.433 [2024-07-12 11:07:49.372904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.433 [2024-07-12 11:07:49.372920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.433 [2024-07-12 11:07:49.372928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.433 [2024-07-12 11:07:49.372934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.433 [2024-07-12 11:07:49.372949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.433 qpair failed and we were unable to recover it.
00:29:32.433 [2024-07-12 11:07:49.382872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.434 [2024-07-12 11:07:49.382942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.434 [2024-07-12 11:07:49.382967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.434 [2024-07-12 11:07:49.382976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.434 [2024-07-12 11:07:49.382983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.434 [2024-07-12 11:07:49.383002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.434 qpair failed and we were unable to recover it.
00:29:32.434 [2024-07-12 11:07:49.392895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.434 [2024-07-12 11:07:49.392957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.434 [2024-07-12 11:07:49.392975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.434 [2024-07-12 11:07:49.392982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.434 [2024-07-12 11:07:49.392988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.434 [2024-07-12 11:07:49.393004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.434 qpair failed and we were unable to recover it.
00:29:32.434 [2024-07-12 11:07:49.402949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.434 [2024-07-12 11:07:49.403019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.434 [2024-07-12 11:07:49.403035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.434 [2024-07-12 11:07:49.403042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.434 [2024-07-12 11:07:49.403048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.434 [2024-07-12 11:07:49.403063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.434 qpair failed and we were unable to recover it.
00:29:32.434 [2024-07-12 11:07:49.412905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.434 [2024-07-12 11:07:49.412974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.434 [2024-07-12 11:07:49.412991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.434 [2024-07-12 11:07:49.412998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.434 [2024-07-12 11:07:49.413004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.434 [2024-07-12 11:07:49.413018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.434 qpair failed and we were unable to recover it.
00:29:32.694 [2024-07-12 11:07:49.422960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.694 [2024-07-12 11:07:49.423025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.694 [2024-07-12 11:07:49.423041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.694 [2024-07-12 11:07:49.423048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.694 [2024-07-12 11:07:49.423054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.694 [2024-07-12 11:07:49.423069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.694 qpair failed and we were unable to recover it.
00:29:32.694 [2024-07-12 11:07:49.432993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.694 [2024-07-12 11:07:49.433058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.694 [2024-07-12 11:07:49.433073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.694 [2024-07-12 11:07:49.433080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.694 [2024-07-12 11:07:49.433086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.694 [2024-07-12 11:07:49.433101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.694 qpair failed and we were unable to recover it.
00:29:32.694 [2024-07-12 11:07:49.443018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.694 [2024-07-12 11:07:49.443115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.694 [2024-07-12 11:07:49.443134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.694 [2024-07-12 11:07:49.443142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.694 [2024-07-12 11:07:49.443148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.694 [2024-07-12 11:07:49.443163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.694 qpair failed and we were unable to recover it.
00:29:32.694 [2024-07-12 11:07:49.453088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.694 [2024-07-12 11:07:49.453164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.694 [2024-07-12 11:07:49.453180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.694 [2024-07-12 11:07:49.453191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.694 [2024-07-12 11:07:49.453197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.694 [2024-07-12 11:07:49.453212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.694 qpair failed and we were unable to recover it.
00:29:32.694 [2024-07-12 11:07:49.463118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.694 [2024-07-12 11:07:49.463215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.694 [2024-07-12 11:07:49.463231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.694 [2024-07-12 11:07:49.463238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.694 [2024-07-12 11:07:49.463244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.694 [2024-07-12 11:07:49.463258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.694 qpair failed and we were unable to recover it.
00:29:32.694 [2024-07-12 11:07:49.473093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.694 [2024-07-12 11:07:49.473158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.694 [2024-07-12 11:07:49.473174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.694 [2024-07-12 11:07:49.473181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.694 [2024-07-12 11:07:49.473187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.694 [2024-07-12 11:07:49.473202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.694 qpair failed and we were unable to recover it.
00:29:32.694 [2024-07-12 11:07:49.483180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.483245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.483260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.483267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.483273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.483287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.493081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.493157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.493172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.493179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.493186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.493200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.503188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.503251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.503267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.503274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.503280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.503294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.513219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.513290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.513306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.513313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.513319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.513334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.523295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.523357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.523373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.523380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.523386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.523400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.533274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.533344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.533359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.533366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.533372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.533386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.543301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.543394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.543414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.543421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.543427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.543441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.553355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.553421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.553437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.553444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.553449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.553464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.563403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.563474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.563490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.563497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.563503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.563517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.573394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.573465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.573480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.573487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.573493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.573507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.583510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.583571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.583586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.583593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.583599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.583617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.593480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.593549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.593565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.593572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.593578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.593592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.603453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.603520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.603535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.603542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.603547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.603561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.613498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.613565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.613580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.613587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.613593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.613607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.623495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.623566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.623581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.623588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.623594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.623608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.633533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.633599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.633618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.633625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.633631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.633646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.643575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.643690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.643705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.643712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.643718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.643733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.653477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.653548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.653565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.653572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.653578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.653594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.663609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.663676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.663692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.663699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.663705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.663720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.695 [2024-07-12 11:07:49.673635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.695 [2024-07-12 11:07:49.673738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.695 [2024-07-12 11:07:49.673754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.695 [2024-07-12 11:07:49.673761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.695 [2024-07-12 11:07:49.673767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.695 [2024-07-12 11:07:49.673785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.695 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.683673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.683736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.683751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.683758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.683765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.683779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.693682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.693761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.693778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.693785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.693791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.693807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.703615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.703797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.703812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.703819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.703825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.703840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.713826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.713973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.713998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.714007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.714013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.714032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.723776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.723844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.723861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.723868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.723874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.723890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.733799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.733865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.733881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.733888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.733894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.733908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.743836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.743900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.743916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.743923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.743929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.743944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.753835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.753900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.753916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.753923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.753929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.753944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.763912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.763977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.763993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.764000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.764010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.764025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.773913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.773987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.774002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.774009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.774015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.774029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.783941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.784000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.784016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.784023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.784029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.784044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.793945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.794006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.957 [2024-07-12 11:07:49.794022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.957 [2024-07-12 11:07:49.794029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.957 [2024-07-12 11:07:49.794035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.957 [2024-07-12 11:07:49.794049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.957 qpair failed and we were unable to recover it.
00:29:32.957 [2024-07-12 11:07:49.804012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.957 [2024-07-12 11:07:49.804076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.958 [2024-07-12 11:07:49.804091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.958 [2024-07-12 11:07:49.804098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.958 [2024-07-12 11:07:49.804104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90
00:29:32.958 [2024-07-12 11:07:49.804119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:32.958 qpair failed and we were unable to recover it.
00:29:32.958 [2024-07-12 11:07:49.814010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.814085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.814100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.814107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.814113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.814132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 00:29:32.958 [2024-07-12 11:07:49.824051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.824118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.824138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.824145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.824151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.824165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 00:29:32.958 [2024-07-12 11:07:49.834020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.834084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.834099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.834106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.834112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.834132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 
00:29:32.958 [2024-07-12 11:07:49.844105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.844169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.844185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.844192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.844198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.844212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 00:29:32.958 [2024-07-12 11:07:49.854070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.854170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.854186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.854196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.854203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.854217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 00:29:32.958 [2024-07-12 11:07:49.864053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.864163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.864180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.864187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.864193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.864208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 
00:29:32.958 [2024-07-12 11:07:49.874231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.874297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.874316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.874324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.874330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.874346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 00:29:32.958 [2024-07-12 11:07:49.884105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.884171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.884188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.884195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.884201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.884216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 00:29:32.958 [2024-07-12 11:07:49.894274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.894347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.894363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.894370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.894376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.894391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 
00:29:32.958 [2024-07-12 11:07:49.904264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.904326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.904342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.904349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.904355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.904369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 00:29:32.958 [2024-07-12 11:07:49.914325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.914405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.914420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.914427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.914433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.914448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 00:29:32.958 [2024-07-12 11:07:49.924294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.924351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.924365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.924372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.924378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.924392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 
00:29:32.958 [2024-07-12 11:07:49.934320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.958 [2024-07-12 11:07:49.934390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.958 [2024-07-12 11:07:49.934405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.958 [2024-07-12 11:07:49.934412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.958 [2024-07-12 11:07:49.934418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:32.958 [2024-07-12 11:07:49.934432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:32.958 qpair failed and we were unable to recover it. 00:29:33.220 [2024-07-12 11:07:49.944394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.220 [2024-07-12 11:07:49.944474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.220 [2024-07-12 11:07:49.944490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.220 [2024-07-12 11:07:49.944500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.220 [2024-07-12 11:07:49.944506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.220 [2024-07-12 11:07:49.944521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.220 qpair failed and we were unable to recover it. 00:29:33.220 [2024-07-12 11:07:49.954401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.220 [2024-07-12 11:07:49.954500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.220 [2024-07-12 11:07:49.954515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.220 [2024-07-12 11:07:49.954522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.220 [2024-07-12 11:07:49.954529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:49.954543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 
00:29:33.221 [2024-07-12 11:07:49.964425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:49.964488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:49.964504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:49.964511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:49.964517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:49.964531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 00:29:33.221 [2024-07-12 11:07:49.974445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:49.974551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:49.974566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:49.974573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:49.974579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:49.974593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 00:29:33.221 [2024-07-12 11:07:49.984481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:49.984541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:49.984556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:49.984563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:49.984569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:49.984583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 
00:29:33.221 [2024-07-12 11:07:49.994509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:49.994570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:49.994585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:49.994592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:49.994598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:49.994613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 00:29:33.221 [2024-07-12 11:07:50.004531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:50.004606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:50.004622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:50.004629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:50.004636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:50.004650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 00:29:33.221 [2024-07-12 11:07:50.014540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:50.014612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:50.014627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:50.014634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:50.014640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:50.014655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 
00:29:33.221 [2024-07-12 11:07:50.024627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:50.024724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:50.024744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:50.024752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:50.024759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:50.024776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 00:29:33.221 [2024-07-12 11:07:50.034596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:50.034665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:50.034685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:50.034692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:50.034699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:50.034714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 00:29:33.221 [2024-07-12 11:07:50.044672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:50.044751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:50.044767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:50.044774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:50.044780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:50.044795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 
00:29:33.221 [2024-07-12 11:07:50.054668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:50.054740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:50.054755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:50.054763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:50.054770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:50.054785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 00:29:33.221 [2024-07-12 11:07:50.064694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:50.064757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:50.064773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:50.064780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:50.064786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:50.064801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 00:29:33.221 [2024-07-12 11:07:50.074705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:50.074765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:50.074781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:50.074788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:50.074794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:50.074812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 
00:29:33.221 [2024-07-12 11:07:50.084722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:50.084867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:50.084908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.221 [2024-07-12 11:07:50.084918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.221 [2024-07-12 11:07:50.084927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.221 [2024-07-12 11:07:50.084952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.221 qpair failed and we were unable to recover it. 00:29:33.221 [2024-07-12 11:07:50.094767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.221 [2024-07-12 11:07:50.094836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.221 [2024-07-12 11:07:50.094851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.094858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.094865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.094880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 00:29:33.222 [2024-07-12 11:07:50.104726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.222 [2024-07-12 11:07:50.104824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.222 [2024-07-12 11:07:50.104839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.104847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.104853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.104867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 
00:29:33.222 [2024-07-12 11:07:50.114824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.222 [2024-07-12 11:07:50.114893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.222 [2024-07-12 11:07:50.114908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.114915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.114922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.114937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 00:29:33.222 [2024-07-12 11:07:50.124856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.222 [2024-07-12 11:07:50.124923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.222 [2024-07-12 11:07:50.124945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.124952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.124959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.124973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 00:29:33.222 [2024-07-12 11:07:50.134882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.222 [2024-07-12 11:07:50.134948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.222 [2024-07-12 11:07:50.134964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.134971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.134977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.134991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 
00:29:33.222 [2024-07-12 11:07:50.144951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.222 [2024-07-12 11:07:50.145030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.222 [2024-07-12 11:07:50.145047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.145055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.145061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.145076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 00:29:33.222 [2024-07-12 11:07:50.154835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.222 [2024-07-12 11:07:50.154896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.222 [2024-07-12 11:07:50.154912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.154920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.154926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.154941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 00:29:33.222 [2024-07-12 11:07:50.165007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.222 [2024-07-12 11:07:50.165076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.222 [2024-07-12 11:07:50.165095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.165103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.165114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.165135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 
00:29:33.222 [2024-07-12 11:07:50.174980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.222 [2024-07-12 11:07:50.175052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.222 [2024-07-12 11:07:50.175068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.175075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.175081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.175096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 00:29:33.222 [2024-07-12 11:07:50.184908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.222 [2024-07-12 11:07:50.184974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.222 [2024-07-12 11:07:50.184989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.184997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.185003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.185018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 00:29:33.222 [2024-07-12 11:07:50.195059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.222 [2024-07-12 11:07:50.195129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.222 [2024-07-12 11:07:50.195145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.222 [2024-07-12 11:07:50.195152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.222 [2024-07-12 11:07:50.195158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.222 [2024-07-12 11:07:50.195173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.222 qpair failed and we were unable to recover it. 
00:29:33.484 [2024-07-12 11:07:50.205053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.484 [2024-07-12 11:07:50.205118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.484 [2024-07-12 11:07:50.205137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.484 [2024-07-12 11:07:50.205145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.484 [2024-07-12 11:07:50.205151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.484 [2024-07-12 11:07:50.205166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.484 qpair failed and we were unable to recover it. 00:29:33.484 [2024-07-12 11:07:50.215092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.484 [2024-07-12 11:07:50.215231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.215249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.215256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.215262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.215279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 00:29:33.485 [2024-07-12 11:07:50.225107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.485 [2024-07-12 11:07:50.225176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.225195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.225203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.225209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.225225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 
00:29:33.485 [2024-07-12 11:07:50.235032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.485 [2024-07-12 11:07:50.235108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.235128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.235135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.235141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.235157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 00:29:33.485 [2024-07-12 11:07:50.245199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.485 [2024-07-12 11:07:50.245263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.245278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.245286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.245292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.245306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 00:29:33.485 [2024-07-12 11:07:50.255102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.485 [2024-07-12 11:07:50.255236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.255252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.255263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.255269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.255284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 
00:29:33.485 [2024-07-12 11:07:50.265262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.485 [2024-07-12 11:07:50.265329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.265345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.265352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.265358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.265373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 00:29:33.485 [2024-07-12 11:07:50.275271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.485 [2024-07-12 11:07:50.275338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.275353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.275360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.275366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.275381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 00:29:33.485 [2024-07-12 11:07:50.285307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.485 [2024-07-12 11:07:50.285372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.285387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.285394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.285400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.285414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 
00:29:33.485 [2024-07-12 11:07:50.295446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.485 [2024-07-12 11:07:50.295515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.295530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.295537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.295543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.295558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 00:29:33.485 [2024-07-12 11:07:50.305381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.485 [2024-07-12 11:07:50.305448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.305463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.305470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.305476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.305491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 00:29:33.485 [2024-07-12 11:07:50.315374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.485 [2024-07-12 11:07:50.315436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.485 [2024-07-12 11:07:50.315452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.485 [2024-07-12 11:07:50.315459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.485 [2024-07-12 11:07:50.315465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.485 [2024-07-12 11:07:50.315479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.485 qpair failed and we were unable to recover it. 
00:29:33.486 [2024-07-12 11:07:50.325411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.325479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.325494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.325501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.486 [2024-07-12 11:07:50.325507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.486 [2024-07-12 11:07:50.325521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-07-12 11:07:50.335415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.335482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.335496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.335503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.486 [2024-07-12 11:07:50.335509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.486 [2024-07-12 11:07:50.335524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-07-12 11:07:50.345458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.345524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.345540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.345550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.486 [2024-07-12 11:07:50.345556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.486 [2024-07-12 11:07:50.345571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.486 qpair failed and we were unable to recover it. 
00:29:33.486 [2024-07-12 11:07:50.355356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.355429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.355444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.355451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.486 [2024-07-12 11:07:50.355457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.486 [2024-07-12 11:07:50.355472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-07-12 11:07:50.365490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.365554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.365569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.365576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.486 [2024-07-12 11:07:50.365582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.486 [2024-07-12 11:07:50.365595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-07-12 11:07:50.375577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.375647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.375663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.375670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.486 [2024-07-12 11:07:50.375676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.486 [2024-07-12 11:07:50.375691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.486 qpair failed and we were unable to recover it. 
00:29:33.486 [2024-07-12 11:07:50.385569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.385671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.385687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.385694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.486 [2024-07-12 11:07:50.385700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.486 [2024-07-12 11:07:50.385714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-07-12 11:07:50.395579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.395645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.395661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.395668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.486 [2024-07-12 11:07:50.395674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.486 [2024-07-12 11:07:50.395689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-07-12 11:07:50.405616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.405716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.405732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.405740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.486 [2024-07-12 11:07:50.405746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.486 [2024-07-12 11:07:50.405761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.486 qpair failed and we were unable to recover it. 
00:29:33.486 [2024-07-12 11:07:50.415597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.415670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.415685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.415692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.486 [2024-07-12 11:07:50.415698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.486 [2024-07-12 11:07:50.415713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-07-12 11:07:50.425671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.486 [2024-07-12 11:07:50.425736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.425751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.486 [2024-07-12 11:07:50.425759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.487 [2024-07-12 11:07:50.425765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.487 [2024-07-12 11:07:50.425779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-07-12 11:07:50.435722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.487 [2024-07-12 11:07:50.435802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.487 [2024-07-12 11:07:50.435831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.487 [2024-07-12 11:07:50.435840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.487 [2024-07-12 11:07:50.435846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.487 [2024-07-12 11:07:50.435866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.487 qpair failed and we were unable to recover it. 
00:29:33.487 [2024-07-12 11:07:50.445706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.487 [2024-07-12 11:07:50.445778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.487 [2024-07-12 11:07:50.445803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.487 [2024-07-12 11:07:50.445812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.487 [2024-07-12 11:07:50.445818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.487 [2024-07-12 11:07:50.445838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-07-12 11:07:50.455763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.487 [2024-07-12 11:07:50.455839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.487 [2024-07-12 11:07:50.455864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.487 [2024-07-12 11:07:50.455873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.487 [2024-07-12 11:07:50.455879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.487 [2024-07-12 11:07:50.455898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-07-12 11:07:50.465762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.487 [2024-07-12 11:07:50.465827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.487 [2024-07-12 11:07:50.465844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.487 [2024-07-12 11:07:50.465852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.487 [2024-07-12 11:07:50.465858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.487 [2024-07-12 11:07:50.465873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.487 qpair failed and we were unable to recover it. 
00:29:33.749 [2024-07-12 11:07:50.475789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.749 [2024-07-12 11:07:50.475852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.749 [2024-07-12 11:07:50.475869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.749 [2024-07-12 11:07:50.475876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.749 [2024-07-12 11:07:50.475882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.749 [2024-07-12 11:07:50.475902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.749 qpair failed and we were unable to recover it. 00:29:33.749 [2024-07-12 11:07:50.485833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.485927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.485943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.485950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.485956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.485971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 00:29:33.750 [2024-07-12 11:07:50.495793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.495868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.495888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.495896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.495902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.495918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 
00:29:33.750 [2024-07-12 11:07:50.505829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.505917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.505933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.505940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.505946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.505961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 00:29:33.750 [2024-07-12 11:07:50.515912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.515974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.515990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.515997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.516003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.516017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 00:29:33.750 [2024-07-12 11:07:50.525951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.526038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.526058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.526066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.526072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.526086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 
00:29:33.750 [2024-07-12 11:07:50.535939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.536008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.536023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.536031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.536037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.536051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 00:29:33.750 [2024-07-12 11:07:50.546008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.546083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.546099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.546106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.546112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.546133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 00:29:33.750 [2024-07-12 11:07:50.556064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.556149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.556165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.556172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.556178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.556193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 
00:29:33.750 [2024-07-12 11:07:50.566061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.566159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.566175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.566182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.566192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.566207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 00:29:33.750 [2024-07-12 11:07:50.576052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.576119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.576140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.576147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.576153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.576168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 00:29:33.750 [2024-07-12 11:07:50.586085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.586164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.586180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.586187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.586193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.586207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 
00:29:33.750 [2024-07-12 11:07:50.596145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.596212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.596227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.596235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.596241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.596255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 00:29:33.750 [2024-07-12 11:07:50.606169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.606233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.606250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.606257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.606263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.606277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 00:29:33.750 [2024-07-12 11:07:50.616208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.616280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.750 [2024-07-12 11:07:50.616296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.750 [2024-07-12 11:07:50.616303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.750 [2024-07-12 11:07:50.616309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.750 [2024-07-12 11:07:50.616324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.750 qpair failed and we were unable to recover it. 
00:29:33.750 [2024-07-12 11:07:50.626192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.750 [2024-07-12 11:07:50.626263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.626279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.626286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.626292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.626307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 00:29:33.751 [2024-07-12 11:07:50.636270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.751 [2024-07-12 11:07:50.636334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.636350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.636357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.636363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.636377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 00:29:33.751 [2024-07-12 11:07:50.646264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.751 [2024-07-12 11:07:50.646327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.646343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.646350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.646356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.646371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 
00:29:33.751 [2024-07-12 11:07:50.656290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.751 [2024-07-12 11:07:50.656360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.656375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.656382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.656392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.656406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 00:29:33.751 [2024-07-12 11:07:50.666307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.751 [2024-07-12 11:07:50.666370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.666386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.666393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.666399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.666414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 00:29:33.751 [2024-07-12 11:07:50.676358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.751 [2024-07-12 11:07:50.676462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.676477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.676484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.676490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.676505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 
00:29:33.751 [2024-07-12 11:07:50.686483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.751 [2024-07-12 11:07:50.686550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.686566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.686573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.686579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.686593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 00:29:33.751 [2024-07-12 11:07:50.696305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.751 [2024-07-12 11:07:50.696370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.696385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.696392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.696398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.696413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 00:29:33.751 [2024-07-12 11:07:50.706412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.751 [2024-07-12 11:07:50.706477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.706493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.706500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.706506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.706521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 
00:29:33.751 [2024-07-12 11:07:50.716453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.751 [2024-07-12 11:07:50.716521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.716539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.716546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.716552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.716567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 00:29:33.751 [2024-07-12 11:07:50.726484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.751 [2024-07-12 11:07:50.726550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.751 [2024-07-12 11:07:50.726566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.751 [2024-07-12 11:07:50.726573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.751 [2024-07-12 11:07:50.726579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:33.751 [2024-07-12 11:07:50.726593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:33.751 qpair failed and we were unable to recover it. 00:29:34.014 [2024-07-12 11:07:50.736488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.014 [2024-07-12 11:07:50.736555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.014 [2024-07-12 11:07:50.736572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.014 [2024-07-12 11:07:50.736579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.014 [2024-07-12 11:07:50.736585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.014 [2024-07-12 11:07:50.736599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.014 qpair failed and we were unable to recover it. 
00:29:34.014 [2024-07-12 11:07:50.746548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.014 [2024-07-12 11:07:50.746609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.014 [2024-07-12 11:07:50.746624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.014 [2024-07-12 11:07:50.746635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.014 [2024-07-12 11:07:50.746641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.014 [2024-07-12 11:07:50.746656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.014 qpair failed and we were unable to recover it. 00:29:34.014 [2024-07-12 11:07:50.756568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.014 [2024-07-12 11:07:50.756630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.014 [2024-07-12 11:07:50.756646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.014 [2024-07-12 11:07:50.756653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.014 [2024-07-12 11:07:50.756659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.014 [2024-07-12 11:07:50.756674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.014 qpair failed and we were unable to recover it. 00:29:34.014 [2024-07-12 11:07:50.766578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.014 [2024-07-12 11:07:50.766642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.014 [2024-07-12 11:07:50.766658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.014 [2024-07-12 11:07:50.766665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.014 [2024-07-12 11:07:50.766671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.014 [2024-07-12 11:07:50.766685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.014 qpair failed and we were unable to recover it. 
00:29:34.014 [2024-07-12 11:07:50.776616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.014 [2024-07-12 11:07:50.776684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.014 [2024-07-12 11:07:50.776700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.014 [2024-07-12 11:07:50.776707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.014 [2024-07-12 11:07:50.776713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.014 [2024-07-12 11:07:50.776728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.014 qpair failed and we were unable to recover it. 00:29:34.014 [2024-07-12 11:07:50.786596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.014 [2024-07-12 11:07:50.786660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.014 [2024-07-12 11:07:50.786676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.014 [2024-07-12 11:07:50.786683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.014 [2024-07-12 11:07:50.786689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.014 [2024-07-12 11:07:50.786703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.014 qpair failed and we were unable to recover it. 00:29:34.014 [2024-07-12 11:07:50.796648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.014 [2024-07-12 11:07:50.796720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.014 [2024-07-12 11:07:50.796745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.014 [2024-07-12 11:07:50.796753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.014 [2024-07-12 11:07:50.796760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.014 [2024-07-12 11:07:50.796780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.014 qpair failed and we were unable to recover it. 
00:29:34.014 [2024-07-12 11:07:50.806677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.014 [2024-07-12 11:07:50.806743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.014 [2024-07-12 11:07:50.806762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.014 [2024-07-12 11:07:50.806770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.014 [2024-07-12 11:07:50.806776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.014 [2024-07-12 11:07:50.806792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.014 qpair failed and we were unable to recover it. 00:29:34.014 [2024-07-12 11:07:50.816741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.014 [2024-07-12 11:07:50.816816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.014 [2024-07-12 11:07:50.816841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.014 [2024-07-12 11:07:50.816850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.014 [2024-07-12 11:07:50.816856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.014 [2024-07-12 11:07:50.816876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.014 qpair failed and we were unable to recover it. 00:29:34.014 [2024-07-12 11:07:50.826743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.015 [2024-07-12 11:07:50.826808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.015 [2024-07-12 11:07:50.826825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.015 [2024-07-12 11:07:50.826832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.015 [2024-07-12 11:07:50.826838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.015 [2024-07-12 11:07:50.826853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.015 qpair failed and we were unable to recover it. 
00:29:34.015 [2024-07-12 11:07:50.836746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.015 [2024-07-12 11:07:50.836814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.015 [2024-07-12 11:07:50.836843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.015 [2024-07-12 11:07:50.836852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.015 [2024-07-12 11:07:50.836859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.015 [2024-07-12 11:07:50.836878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.015 qpair failed and we were unable to recover it. 00:29:34.015 [2024-07-12 11:07:50.846868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.015 [2024-07-12 11:07:50.846942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.015 [2024-07-12 11:07:50.846959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.015 [2024-07-12 11:07:50.846966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.015 [2024-07-12 11:07:50.846973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.015 [2024-07-12 11:07:50.846988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.015 qpair failed and we were unable to recover it. 00:29:34.015 [2024-07-12 11:07:50.856843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.015 [2024-07-12 11:07:50.856910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.015 [2024-07-12 11:07:50.856927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.015 [2024-07-12 11:07:50.856934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.015 [2024-07-12 11:07:50.856940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.015 [2024-07-12 11:07:50.856955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.015 qpair failed and we were unable to recover it. 
00:29:34.015 [2024-07-12 11:07:50.866855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.015 [2024-07-12 11:07:50.866923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.015 [2024-07-12 11:07:50.866939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.015 [2024-07-12 11:07:50.866946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.015 [2024-07-12 11:07:50.866952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.015 [2024-07-12 11:07:50.866967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.015 qpair failed and we were unable to recover it. 00:29:34.015 [2024-07-12 11:07:50.876858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.015 [2024-07-12 11:07:50.876969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.015 [2024-07-12 11:07:50.876985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.015 [2024-07-12 11:07:50.876992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.015 [2024-07-12 11:07:50.876998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.015 [2024-07-12 11:07:50.877019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.015 qpair failed and we were unable to recover it. 00:29:34.015 [2024-07-12 11:07:50.886902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.015 [2024-07-12 11:07:50.886970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.015 [2024-07-12 11:07:50.886986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.015 [2024-07-12 11:07:50.886993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.015 [2024-07-12 11:07:50.886999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b90000b90 00:29:34.015 [2024-07-12 11:07:50.887013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.015 qpair failed and we were unable to recover it. 
00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Read completed with error (sct=0, sc=8) 00:29:34.015 starting I/O failed 00:29:34.015 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 [2024-07-12 11:07:50.887908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.016 [2024-07-12 11:07:50.896986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.016 [2024-07-12 11:07:50.897151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.486 [2024-07-12 11:07:50.897217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.016 [2024-07-12 11:07:50.897242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.016 [2024-07-12 11:07:50.897262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x82c220 00:29:34.016 [2024-07-12 11:07:50.897313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.016 qpair failed and we were unable to recover it. 00:29:34.016 [2024-07-12 11:07:50.906995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.016 [2024-07-12 11:07:50.907106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.016 [2024-07-12 11:07:50.907146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.016 [2024-07-12 11:07:50.907162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.016 [2024-07-12 11:07:50.907177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x82c220 00:29:34.016 [2024-07-12 11:07:50.907207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.016 qpair failed and we were unable to recover it. 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 
00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 [2024-07-12 11:07:50.908160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Write completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.016 starting I/O failed 00:29:34.016 Read completed with error (sct=0, sc=8) 00:29:34.017 starting I/O failed 
00:29:34.017 [2024-07-12 11:07:50.908921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.017 [2024-07-12 11:07:50.917030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.017 [2024-07-12 11:07:50.917199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.017 [2024-07-12 11:07:50.917251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.017 [2024-07-12 11:07:50.917274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.017 [2024-07-12 11:07:50.917294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b98000b90 00:29:34.017 [2024-07-12 11:07:50.917340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.017 qpair failed and we were unable to recover it. 00:29:34.017 [2024-07-12 11:07:50.927035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.017 [2024-07-12 11:07:50.927153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.017 [2024-07-12 11:07:50.927185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.017 [2024-07-12 11:07:50.927201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.017 [2024-07-12 11:07:50.927214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b98000b90 00:29:34.017 [2024-07-12 11:07:50.927245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.017 qpair failed and we were unable to recover it. 00:29:34.017 [2024-07-12 11:07:50.927630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839f20 is same with the state(5) to be set 00:29:34.017 [2024-07-12 11:07:50.937032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.017 [2024-07-12 11:07:50.937209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.017 [2024-07-12 11:07:50.937274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.017 [2024-07-12 11:07:50.937298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.017 [2024-07-12 11:07:50.937318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:34.017 [2024-07-12 11:07:50.937371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.017 qpair failed and we were unable to recover it. 
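[editor's note] Every failure in this stretch has the same shape: the target side (ctrlr.c) rejects the I/O-queue CONNECT with "Unknown controller ID 0x1", the host's connect poll then reports sct 1, sc 130 (0x82, which in the Fabrics CONNECT status values corresponds to Connect Invalid Parameters) and drops the qpair with transport error -6; the elided aborted reads/writes above carry sct 0, sc 8, i.e. the generic "Command Aborted due to SQ Deletion" status for I/O flushed when the qpair went down. A minimal sketch of exercising the same CONNECT path by hand with the kernel initiator, using the connection parameters taken from these records (whether it fails the same way depends on the disconnect the test injects; that behaviour, not the commands, is the assumption here):

  # attach to the subsystem the test targets
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # detach again once done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1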
00:29:34.017 [2024-07-12 11:07:50.947125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.017 [2024-07-12 11:07:50.947235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.017 [2024-07-12 11:07:50.947275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.017 [2024-07-12 11:07:50.947292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.017 [2024-07-12 11:07:50.947306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:34.017 [2024-07-12 11:07:50.947338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.017 qpair failed and we were unable to recover it. 00:29:34.017 [2024-07-12 11:07:50.947760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x839f20 (9): Bad file descriptor 00:29:34.017 Initializing NVMe Controllers 00:29:34.017 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.017 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:34.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:34.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:34.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:34.017 Initialization complete. Launching workers. 
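[editor's note] Once the attach above succeeds, the tc2 case ends and the records below tear the fixture down: nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring and killprocess signals and reaps the target app. A minimal bash sketch of the kill-and-reap pattern visible in that xtrace (the real helper in autotest_common.sh also resolves the process name via ps; this reconstruction is inferred from the trace, not copied from the script):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0   # nothing to do if the pid is already gone
      echo "killing process with pid $pid"
      kill "$pid"                  # ask it to exit...
      wait "$pid"                  # ...then reap it (works here because the target is a child of the test shell)
  }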
00:29:34.017 Starting thread on core 1 00:29:34.017 Starting thread on core 2 00:29:34.017 Starting thread on core 3 00:29:34.017 Starting thread on core 0 00:29:34.017 11:07:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:34.017 00:29:34.017 real 0m11.314s 00:29:34.017 user 0m21.230s 00:29:34.017 sys 0m4.173s 00:29:34.017 11:07:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:34.017 11:07:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.017 ************************************ 00:29:34.017 END TEST nvmf_target_disconnect_tc2 00:29:34.017 ************************************ 00:29:34.278 11:07:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:34.278 rmmod nvme_tcp 00:29:34.278 rmmod nvme_fabrics 00:29:34.278 rmmod nvme_keyring 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2285488 ']' 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2285488 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2285488 ']' 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2285488 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2285488 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2285488' 00:29:34.278 killing process with pid 2285488 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2285488 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2285488 00:29:34.278 
11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:34.278 11:07:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.825 11:07:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:36.825 00:29:36.825 real 0m21.371s 00:29:36.825 user 0m48.482s 00:29:36.825 sys 0m10.113s 00:29:36.825 11:07:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:36.825 11:07:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:36.825 ************************************ 00:29:36.825 END TEST nvmf_target_disconnect 00:29:36.825 ************************************ 00:29:36.825 11:07:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:36.825 11:07:53 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:36.825 11:07:53 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:36.825 11:07:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.825 11:07:53 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:36.825 00:29:36.825 real 22m46.902s 00:29:36.825 user 47m18.114s 00:29:36.825 sys 7m18.732s 00:29:36.825 11:07:53 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:36.825 11:07:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.825 ************************************ 00:29:36.825 END TEST nvmf_tcp 00:29:36.825 ************************************ 00:29:36.825 11:07:53 -- common/autotest_common.sh@1142 -- # return 0 00:29:36.825 11:07:53 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:36.825 11:07:53 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:36.825 11:07:53 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:36.825 11:07:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:36.825 11:07:53 -- common/autotest_common.sh@10 -- # set +x 00:29:36.825 ************************************ 00:29:36.825 START TEST spdkcli_nvmf_tcp 00:29:36.825 ************************************ 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:36.825 * Looking for test storage... 
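[editor's note] The spdkcli test that starts here builds a full NVMe-oF configuration through spdkcli via spdkcli_job.py, verifies it against a match file, then deletes everything. spdkcli.py also accepts a single command per invocation (that is how the `ll /nvmf` check below is run); a minimal sketch of issuing a few of the job's create commands that way from the spdk checkout, trimmed and adapted to one bdev and one subsystem (the full command list appears verbatim in the job below):

  ./scripts/spdkcli.py "/bdevs/malloc create 32 512 Malloc1"
  ./scripts/spdkcli.py "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
  ./scripts/spdkcli.py "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
  ./scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1"
  ./scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
  ./scripts/spdkcli.py "ll /nvmf"   # inspect the resulting tree, as the check_match step does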
00:29:36.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.825 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2287313 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2287313 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2287313 ']' 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.826 11:07:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.826 [2024-07-12 11:07:53.690568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:36.826 [2024-07-12 11:07:53.690641] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287313 ] 00:29:36.826 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.826 [2024-07-12 11:07:53.772339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:37.088 [2024-07-12 11:07:53.867433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.088 [2024-07-12 11:07:53.867533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:37.661 11:07:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:37.661 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:37.661 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:37.661 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:37.661 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:37.661 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:37.661 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:37.661 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:37.661 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:37.661 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:37.661 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:37.661 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:37.661 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:37.661 ' 00:29:40.209 [2024-07-12 11:07:57.135162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.594 [2024-07-12 11:07:58.431198] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:44.138 [2024-07-12 11:08:00.858073] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:46.051 [2024-07-12 11:08:02.947975] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:47.964 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:47.964 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:47.964 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:47.964 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:47.964 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:47.964 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:47.964 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:47.964 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:47.964 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:47.964 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:47.964 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:47.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:47.964 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:47.964 11:08:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:47.964 11:08:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:47.964 11:08:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.964 11:08:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:47.964 11:08:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:47.964 11:08:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.964 11:08:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:47.964 11:08:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:48.277 11:08:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:48.277 11:08:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:48.277 11:08:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:48.277 11:08:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:48.277 11:08:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:48.277 11:08:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:48.277 11:08:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:48.277 11:08:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:48.277 11:08:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:48.277 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:48.277 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:48.277 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:48.277 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:48.277 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:48.277 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:48.277 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:48.277 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:48.277 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:48.277 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:48.277 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:48.277 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:48.277 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:48.277 ' 00:29:53.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:53.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:53.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:53.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:53.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:53.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:53.577 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:53.577 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:53.577 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:53.577 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:53.577 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:29:53.577 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:53.577 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:53.577 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:53.577 11:08:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:53.577 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:53.577 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2287313 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2287313 ']' 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2287313 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2287313 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2287313' 00:29:53.837 killing process with pid 2287313 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2287313 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2287313 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2287313 ']' 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2287313 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2287313 ']' 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2287313 00:29:53.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2287313) - No such process 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2287313 is not found' 00:29:53.837 Process with pid 2287313 is not found 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:53.837 00:29:53.837 real 0m17.254s 00:29:53.837 user 0m37.765s 00:29:53.837 sys 0m0.986s 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:53.837 11:08:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.837 ************************************ 00:29:53.837 END TEST spdkcli_nvmf_tcp 00:29:53.837 ************************************ 00:29:53.837 11:08:10 -- common/autotest_common.sh@1142 -- # return 0 00:29:53.837 11:08:10 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:53.837 11:08:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:53.837 11:08:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.837 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:29:53.837 ************************************ 00:29:53.837 START TEST nvmf_identify_passthru 00:29:53.837 ************************************ 00:29:53.837 11:08:10 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:54.098 * Looking for test storage... 00:29:54.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:54.098 11:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.098 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.098 11:08:10 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.098 11:08:10 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.098 11:08:10 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.098 11:08:10 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.098 11:08:10 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.098 11:08:10 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.099 11:08:10 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:54.099 11:08:10 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:54.099 11:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.099 11:08:10 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.099 11:08:10 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.099 11:08:10 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.099 11:08:10 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.099 11:08:10 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.099 11:08:10 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.099 11:08:10 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:54.099 11:08:10 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.099 11:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.099 11:08:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:54.099 11:08:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:54.099 11:08:10 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:54.099 11:08:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.243 11:08:17 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.243 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:02.244 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:02.244 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:02.244 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:02.244 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
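[editor's note] With both e810 ports found (cvl_0_0 and cvl_0_1), the nvmf_tcp_init trace below wires up the test topology: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed to just the topology commands, all taken verbatim from the trace that follows (interface names are specific to this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target sanity check, mirrored in reverse from inside the namespace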
00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.244 11:08:17 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:30:02.244 00:30:02.244 --- 10.0.0.2 ping statistics --- 00:30:02.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.244 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:02.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:30:02.244 00:30:02.244 --- 10.0.0.1 ping statistics --- 00:30:02.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.244 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:02.244 11:08:18 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:02.244 11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:02.244 11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:02.244 11:08:18 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:02.244 11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:02.244 11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:02.244 11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:02.244 11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:02.244 11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:02.244 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.244 
11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:02.244 11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:02.244 11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:02.244 11:08:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:02.244 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.506 11:08:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:02.506 11:08:19 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:02.506 11:08:19 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:02.506 11:08:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:02.506 11:08:19 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:02.506 11:08:19 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:02.506 11:08:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:02.506 11:08:19 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2294503 00:30:02.506 11:08:19 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:02.506 11:08:19 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:02.506 11:08:19 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2294503 00:30:02.506 11:08:19 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2294503 ']' 00:30:02.506 11:08:19 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.506 11:08:19 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:02.506 11:08:19 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.506 11:08:19 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:02.506 11:08:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:02.506 [2024-07-12 11:08:19.455339] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:02.506 [2024-07-12 11:08:19.455405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.767 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.767 [2024-07-12 11:08:19.540808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:02.767 [2024-07-12 11:08:19.636958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.767 [2024-07-12 11:08:19.637017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:02.767 [2024-07-12 11:08:19.637026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.767 [2024-07-12 11:08:19.637033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.767 [2024-07-12 11:08:19.637046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.767 [2024-07-12 11:08:19.637201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.767 [2024-07-12 11:08:19.637303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.767 [2024-07-12 11:08:19.637466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.767 [2024-07-12 11:08:19.637468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.343 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:03.343 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:03.343 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:03.343 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.343 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.343 INFO: Log level set to 20 00:30:03.343 INFO: Requests: 00:30:03.343 { 00:30:03.343 "jsonrpc": "2.0", 00:30:03.343 "method": "nvmf_set_config", 00:30:03.343 "id": 1, 00:30:03.343 "params": { 00:30:03.343 "admin_cmd_passthru": { 00:30:03.343 "identify_ctrlr": true 00:30:03.343 } 00:30:03.343 } 00:30:03.343 } 00:30:03.343 00:30:03.343 INFO: response: 00:30:03.343 { 00:30:03.343 "jsonrpc": "2.0", 00:30:03.343 "id": 1, 00:30:03.343 "result": true 00:30:03.343 } 00:30:03.343 00:30:03.343 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.343 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:03.343 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.343 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.343 INFO: Setting log level to 20 00:30:03.343 INFO: Setting log level to 20 00:30:03.343 INFO: Log level set to 20 00:30:03.343 INFO: Log level set to 20 00:30:03.343 INFO: Requests: 00:30:03.343 { 00:30:03.343 "jsonrpc": "2.0", 00:30:03.343 "method": "framework_start_init", 00:30:03.343 "id": 1 00:30:03.343 } 00:30:03.343 00:30:03.343 INFO: Requests: 00:30:03.343 { 00:30:03.343 "jsonrpc": "2.0", 00:30:03.343 "method": "framework_start_init", 00:30:03.343 "id": 1 00:30:03.343 } 00:30:03.343 00:30:03.606 [2024-07-12 11:08:20.351120] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:03.606 INFO: response: 00:30:03.606 { 00:30:03.606 "jsonrpc": "2.0", 00:30:03.606 "id": 1, 00:30:03.606 "result": true 00:30:03.606 } 00:30:03.606 00:30:03.606 INFO: response: 00:30:03.606 { 00:30:03.606 "jsonrpc": "2.0", 00:30:03.606 "id": 1, 00:30:03.606 "result": true 00:30:03.606 } 00:30:03.606 00:30:03.606 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.606 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:03.606 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.606 11:08:20 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:03.606 INFO: Setting log level to 40 00:30:03.606 INFO: Setting log level to 40 00:30:03.606 INFO: Setting log level to 40 00:30:03.606 [2024-07-12 11:08:20.364666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.606 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.606 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:03.606 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.606 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.606 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:03.606 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.606 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.868 Nvme0n1 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.868 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.868 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.868 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.868 [2024-07-12 11:08:20.759110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.868 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.868 [ 00:30:03.868 { 00:30:03.868 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:03.868 "subtype": "Discovery", 00:30:03.868 "listen_addresses": [], 00:30:03.868 "allow_any_host": true, 00:30:03.868 "hosts": [] 00:30:03.868 }, 00:30:03.868 { 00:30:03.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.868 "subtype": "NVMe", 00:30:03.868 "listen_addresses": [ 00:30:03.868 { 00:30:03.868 "trtype": "TCP", 00:30:03.868 "adrfam": "IPv4", 00:30:03.868 "traddr": "10.0.0.2", 00:30:03.868 "trsvcid": "4420" 00:30:03.868 } 00:30:03.868 ], 00:30:03.868 "allow_any_host": true, 00:30:03.868 "hosts": [], 00:30:03.868 "serial_number": 
"SPDK00000000000001", 00:30:03.868 "model_number": "SPDK bdev Controller", 00:30:03.868 "max_namespaces": 1, 00:30:03.868 "min_cntlid": 1, 00:30:03.868 "max_cntlid": 65519, 00:30:03.868 "namespaces": [ 00:30:03.868 { 00:30:03.868 "nsid": 1, 00:30:03.868 "bdev_name": "Nvme0n1", 00:30:03.868 "name": "Nvme0n1", 00:30:03.868 "nguid": "36344730526054870025384500000044", 00:30:03.868 "uuid": "36344730-5260-5487-0025-384500000044" 00:30:03.868 } 00:30:03.868 ] 00:30:03.868 } 00:30:03.868 ] 00:30:03.868 11:08:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.868 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:03.868 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:03.868 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:03.868 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.131 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:04.131 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:04.131 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:04.131 11:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:04.131 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.131 11:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:04.131 11:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:04.131 11:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:04.131 11:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:04.131 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.131 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.131 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.131 11:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:04.131 11:08:21 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:04.131 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:04.131 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:04.131 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:04.131 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:04.131 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:04.131 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:04.131 rmmod nvme_tcp 00:30:04.131 rmmod nvme_fabrics 00:30:04.131 rmmod nvme_keyring 00:30:04.393 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:04.393 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:04.393 11:08:21 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:04.393 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2294503 ']' 00:30:04.393 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2294503 00:30:04.393 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2294503 ']' 00:30:04.393 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2294503 00:30:04.393 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:04.393 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.393 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2294503 00:30:04.393 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:04.393 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:04.393 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2294503' 00:30:04.393 killing process with pid 2294503 00:30:04.393 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2294503 00:30:04.393 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2294503 00:30:04.655 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:04.655 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:04.655 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:04.655 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:04.655 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:04.655 11:08:21 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.655 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:04.655 11:08:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.204 11:08:23 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:07.204 00:30:07.204 real 0m12.763s 00:30:07.204 user 0m9.771s 00:30:07.204 sys 0m6.249s 00:30:07.204 11:08:23 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:07.204 11:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.204 ************************************ 00:30:07.204 END TEST nvmf_identify_passthru 00:30:07.204 ************************************ 00:30:07.204 11:08:23 -- common/autotest_common.sh@1142 -- # return 0 00:30:07.204 11:08:23 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:07.204 11:08:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:07.204 11:08:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.204 11:08:23 -- common/autotest_common.sh@10 -- # set +x 00:30:07.204 ************************************ 00:30:07.205 START TEST nvmf_dif 00:30:07.205 ************************************ 00:30:07.205 11:08:23 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:07.205 * Looking for test storage... 
00:30:07.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:07.205 11:08:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.205 11:08:23 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.205 11:08:23 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.205 11:08:23 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.205 11:08:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.205 11:08:23 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.205 11:08:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.205 11:08:23 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:07.205 11:08:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:07.205 11:08:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:07.205 11:08:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:07.205 11:08:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:07.205 11:08:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:07.205 11:08:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.205 11:08:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:07.205 11:08:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:07.205 11:08:23 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:07.205 11:08:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.353 11:08:30 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:15.354 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:15.354 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:15.354 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:15.354 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:15.354 11:08:30 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.354 11:08:31 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.354 11:08:31 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.354 11:08:31 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:15.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:30:15.354 00:30:15.354 --- 10.0.0.2 ping statistics --- 00:30:15.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.354 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:30:15.354 11:08:31 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:30:15.354 00:30:15.354 --- 10.0.0.1 ping statistics --- 00:30:15.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.354 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:30:15.354 11:08:31 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.354 11:08:31 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:15.354 11:08:31 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:15.354 11:08:31 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:17.903 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:17.903 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:17.903 11:08:34 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.903 11:08:34 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:17.903 11:08:34 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:17.903 11:08:34 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.903 11:08:34 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:17.903 11:08:34 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:17.903 11:08:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:17.903 11:08:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:17.903 11:08:34 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:17.903 11:08:34 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:17.903 11:08:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:17.903 11:08:34 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2300559 00:30:17.903 11:08:34 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2300559 00:30:17.903 11:08:34 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:17.903 11:08:34 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2300559 ']' 00:30:17.903 11:08:34 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.903 11:08:34 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:17.903 11:08:34 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.903 11:08:34 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:17.903 11:08:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:18.165 [2024-07-12 11:08:34.926818] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:18.165 [2024-07-12 11:08:34.926882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.165 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.165 [2024-07-12 11:08:35.015621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.165 [2024-07-12 11:08:35.110055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.165 [2024-07-12 11:08:35.110113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.165 [2024-07-12 11:08:35.110130] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.165 [2024-07-12 11:08:35.110138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.165 [2024-07-12 11:08:35.110144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
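# NOTE: condensed form of the target bring-up traced above; a sketch assuming an
# SPDK checkout as the working directory (the until-loop is a simplified stand-in
# for the harness's waitforlisten helper, not a reproduction of it):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# block until the app answers on the default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# then create the DIF-capable TCP transport, as dif.sh does next
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip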
00:30:18.165 [2024-07-12 11:08:35.110174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.738 11:08:35 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.999 11:08:35 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:18.999 11:08:35 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:18.999 11:08:35 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.999 11:08:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:18.999 11:08:35 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.999 11:08:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:18.999 11:08:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:18.999 11:08:35 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.999 11:08:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:18.999 [2024-07-12 11:08:35.774209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.999 11:08:35 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.999 11:08:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:18.999 11:08:35 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:18.999 11:08:35 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:18.999 11:08:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:18.999 ************************************ 00:30:18.999 START TEST fio_dif_1_default 00:30:18.999 ************************************ 00:30:18.999 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:18.999 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:18.999 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:18.999 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:19.000 bdev_null0 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:19.000 [2024-07-12 11:08:35.866632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.000 { 00:30:19.000 "params": { 00:30:19.000 "name": "Nvme$subsystem", 00:30:19.000 "trtype": "$TEST_TRANSPORT", 00:30:19.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.000 "adrfam": "ipv4", 00:30:19.000 "trsvcid": "$NVMF_PORT", 00:30:19.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.000 "hdgst": ${hdgst:-false}, 00:30:19.000 "ddgst": ${ddgst:-false} 00:30:19.000 }, 00:30:19.000 "method": "bdev_nvme_attach_controller" 00:30:19.000 } 00:30:19.000 EOF 00:30:19.000 )") 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:19.000 "params": { 00:30:19.000 "name": "Nvme0", 00:30:19.000 "trtype": "tcp", 00:30:19.000 "traddr": "10.0.0.2", 00:30:19.000 "adrfam": "ipv4", 00:30:19.000 "trsvcid": "4420", 00:30:19.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:19.000 "hdgst": false, 00:30:19.000 "ddgst": false 00:30:19.000 }, 00:30:19.000 "method": "bdev_nvme_attach_controller" 00:30:19.000 }' 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:19.000 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.571 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:19.571 fio-3.35 00:30:19.571 Starting 1 thread 00:30:19.571 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.806 00:30:31.806 filename0: (groupid=0, jobs=1): err= 0: pid=2301087: Fri Jul 12 11:08:46 2024 00:30:31.806 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:30:31.806 slat (nsec): min=5386, max=35535, avg=6313.42, stdev=1824.05 00:30:31.806 clat (usec): min=40948, max=42812, avg=41981.56, stdev=90.11 00:30:31.806 lat (usec): min=40956, max=42847, avg=41987.88, stdev=90.52 00:30:31.806 clat percentiles (usec): 00:30:31.806 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:31.806 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:31.806 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:31.806 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:31.806 | 99.99th=[42730] 00:30:31.806 bw ( KiB/s): min= 351, max= 384, per=99.75%, avg=380.75, stdev=10.00, samples=20 00:30:31.806 iops : min= 87, max= 96, 
avg=95.15, stdev= 2.62, samples=20 00:30:31.806 lat (msec) : 50=100.00% 00:30:31.806 cpu : usr=95.40%, sys=4.38%, ctx=11, majf=0, minf=237 00:30:31.806 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.806 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.806 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:31.806 00:30:31.806 Run status group 0 (all jobs): 00:30:31.806 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10038-10038msec 00:30:31.806 11:08:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:31.806 11:08:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:31.806 11:08:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:31.806 11:08:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:31.806 11:08:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:31.807 11:08:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:31.807 11:08:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.807 11:08:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 11:08:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.807 11:08:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:31.807 11:08:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.807 11:08:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 11:08:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.807 00:30:31.807 real 0m11.110s 00:30:31.807 user 0m18.450s 00:30:31.807 sys 0m0.843s 00:30:31.807 11:08:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:31.807 11:08:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 ************************************ 00:30:31.807 END TEST fio_dif_1_default 00:30:31.807 ************************************ 00:30:31.807 11:08:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:31.807 11:08:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:31.807 11:08:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:31.807 11:08:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.807 11:08:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 ************************************ 00:30:31.807 START TEST fio_dif_1_multi_subsystems 00:30:31.807 ************************************ 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 bdev_null0 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 [2024-07-12 11:08:47.050331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 bdev_null1 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 11:08:47 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.807 { 00:30:31.807 "params": { 00:30:31.807 "name": "Nvme$subsystem", 00:30:31.807 "trtype": "$TEST_TRANSPORT", 00:30:31.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.807 "adrfam": "ipv4", 00:30:31.807 "trsvcid": "$NVMF_PORT", 00:30:31.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.807 "hdgst": ${hdgst:-false}, 00:30:31.807 "ddgst": ${ddgst:-false} 00:30:31.807 }, 00:30:31.807 "method": "bdev_nvme_attach_controller" 00:30:31.807 } 00:30:31.807 EOF 00:30:31.807 )") 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.807 { 00:30:31.807 "params": { 00:30:31.807 "name": "Nvme$subsystem", 00:30:31.807 "trtype": "$TEST_TRANSPORT", 00:30:31.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.807 "adrfam": "ipv4", 00:30:31.807 "trsvcid": "$NVMF_PORT", 00:30:31.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.807 "hdgst": ${hdgst:-false}, 00:30:31.807 "ddgst": ${ddgst:-false} 00:30:31.807 }, 00:30:31.807 "method": "bdev_nvme_attach_controller" 00:30:31.807 } 00:30:31.807 EOF 00:30:31.807 )") 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
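The jq . / IFS=, / printf trio traced at this point is the tail of gen_nvmf_target_json: each subsystem contributed one heredoc fragment to the config array, and the fragments are comma-joined into the bdev configuration that fio reads on /dev/fd/62. A reduced, runnable sketch of the same pattern follows; printf stands in for the heredoc, and the outer "subsystems" wrapper is an assumption reconstructed from SPDK's JSON config format, since nvmf/common.sh itself is only partially traced here:

    # One bdev_nvme_attach_controller fragment per subsystem (0 and 1 here).
    config=()
    for subsystem in 0 1; do
        config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp",
            "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode%s",
            "hostnqn": "nqn.2016-06.io.spdk:host%s",
            "hdgst": false, "ddgst": false},
            "method": "bdev_nvme_attach_controller"}' \
            "$subsystem" "$subsystem" "$subsystem")")
    done
    # Comma-join the fragments (IFS=, drives the "${config[*]}" expansion,
    # exactly as in the trace) and wrap them so jq sees valid JSON.
    printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}\n' \
        "$(IFS=,; printf '%s' "${config[*]}")" | jq .

The printf '%s\n' output printed just below is the comma-joined fragment list for Nvme0 and Nvme1, i.e. the %s payload in the sketch above.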
00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:31.807 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:31.807 "params": { 00:30:31.807 "name": "Nvme0", 00:30:31.807 "trtype": "tcp", 00:30:31.807 "traddr": "10.0.0.2", 00:30:31.807 "adrfam": "ipv4", 00:30:31.807 "trsvcid": "4420", 00:30:31.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:31.807 "hdgst": false, 00:30:31.807 "ddgst": false 00:30:31.807 }, 00:30:31.807 "method": "bdev_nvme_attach_controller" 00:30:31.807 },{ 00:30:31.807 "params": { 00:30:31.807 "name": "Nvme1", 00:30:31.807 "trtype": "tcp", 00:30:31.808 "traddr": "10.0.0.2", 00:30:31.808 "adrfam": "ipv4", 00:30:31.808 "trsvcid": "4420", 00:30:31.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:31.808 "hdgst": false, 00:30:31.808 "ddgst": false 00:30:31.808 }, 00:30:31.808 "method": "bdev_nvme_attach_controller" 00:30:31.808 }' 00:30:31.808 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.808 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.808 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.808 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.808 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:31.808 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.808 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.808 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.808 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:31.808 11:08:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.808 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:31.808 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:31.808 fio-3.35 00:30:31.808 Starting 2 threads 00:30:31.808 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.903 00:30:41.903 filename0: (groupid=0, jobs=1): err= 0: pid=2303291: Fri Jul 12 11:08:58 2024 00:30:41.903 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10020msec) 00:30:41.903 slat (nsec): min=5388, max=32598, avg=6332.33, stdev=2255.46 00:30:41.903 clat (usec): min=918, max=43278, avg=21576.23, stdev=20310.43 00:30:41.903 lat (usec): min=923, max=43310, avg=21582.56, stdev=20310.38 00:30:41.903 clat percentiles (usec): 00:30:41.903 | 1.00th=[ 1057], 5.00th=[ 1123], 10.00th=[ 1139], 20.00th=[ 1172], 00:30:41.903 | 30.00th=[ 1188], 40.00th=[ 1221], 50.00th=[41681], 60.00th=[41681], 00:30:41.903 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:41.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:41.903 | 99.99th=[43254] 
00:30:41.903 bw ( KiB/s): min= 672, max= 768, per=49.88%, avg=740.80, stdev=33.28, samples=20 00:30:41.903 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:30:41.903 lat (usec) : 1000=0.43% 00:30:41.903 lat (msec) : 2=49.35%, 50=50.22% 00:30:41.903 cpu : usr=96.89%, sys=2.90%, ctx=13, majf=0, minf=76 00:30:41.903 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:41.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.903 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.903 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:41.903 filename1: (groupid=0, jobs=1): err= 0: pid=2303292: Fri Jul 12 11:08:58 2024 00:30:41.903 read: IOPS=185, BW=743KiB/s (761kB/s)(7440KiB/10015msec) 00:30:41.903 slat (nsec): min=5385, max=32503, avg=6381.36, stdev=2308.63 00:30:41.904 clat (usec): min=820, max=42337, avg=21518.95, stdev=20287.99 00:30:41.904 lat (usec): min=825, max=42369, avg=21525.33, stdev=20287.91 00:30:41.904 clat percentiles (usec): 00:30:41.904 | 1.00th=[ 898], 5.00th=[ 1123], 10.00th=[ 1139], 20.00th=[ 1188], 00:30:41.904 | 30.00th=[ 1205], 40.00th=[ 1237], 50.00th=[41157], 60.00th=[41681], 00:30:41.904 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:41.904 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:41.904 | 99.99th=[42206] 00:30:41.904 bw ( KiB/s): min= 704, max= 768, per=50.02%, avg=742.40, stdev=32.17, samples=20 00:30:41.904 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:30:41.904 lat (usec) : 1000=1.29% 00:30:41.904 lat (msec) : 2=48.60%, 50=50.11% 00:30:41.904 cpu : usr=96.68%, sys=3.10%, ctx=34, majf=0, minf=159 00:30:41.904 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:41.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.904 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.904 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:41.904 00:30:41.904 Run status group 0 (all jobs): 00:30:41.904 READ: bw=1483KiB/s (1519kB/s), 741KiB/s-743KiB/s (759kB/s-761kB/s), io=14.5MiB (15.2MB), run=10015-10020msec 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.904 00:30:41.904 real 0m11.377s 00:30:41.904 user 0m34.764s 00:30:41.904 sys 0m0.942s 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:41.904 11:08:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:41.904 ************************************ 00:30:41.904 END TEST fio_dif_1_multi_subsystems 00:30:41.904 ************************************ 00:30:41.904 11:08:58 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:41.904 11:08:58 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:41.904 11:08:58 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:41.904 11:08:58 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.904 11:08:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:41.904 ************************************ 00:30:41.904 START TEST fio_dif_rand_params 00:30:41.904 ************************************ 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:41.904 11:08:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.904 bdev_null0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:41.904 [2024-07-12 11:08:58.508769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.904 11:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:41.904 { 00:30:41.904 "params": { 00:30:41.904 "name": "Nvme$subsystem", 00:30:41.904 "trtype": "$TEST_TRANSPORT", 00:30:41.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:41.904 "adrfam": "ipv4", 00:30:41.904 "trsvcid": "$NVMF_PORT", 00:30:41.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:41.904 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:30:41.904 "hdgst": ${hdgst:-false}, 00:30:41.904 "ddgst": ${ddgst:-false} 00:30:41.904 }, 00:30:41.905 "method": "bdev_nvme_attach_controller" 00:30:41.905 } 00:30:41.905 EOF 00:30:41.905 )") 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:41.905 "params": { 00:30:41.905 "name": "Nvme0", 00:30:41.905 "trtype": "tcp", 00:30:41.905 "traddr": "10.0.0.2", 00:30:41.905 "adrfam": "ipv4", 00:30:41.905 "trsvcid": "4420", 00:30:41.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:41.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:41.905 "hdgst": false, 00:30:41.905 "ddgst": false 00:30:41.905 }, 00:30:41.905 "method": "bdev_nvme_attach_controller" 00:30:41.905 }' 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:41.905 11:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.173 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:42.173 ... 
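The job banner above shows fio picking up a generated job file on /dev/fd/61 while the bdev JSON arrives on /dev/fd/62. Reassembled as a plain command line it would look roughly like the following sketch; the bdev name Nvme0n1 is an assumption (SPDK conventionally exposes namespace 1 of controller Nvme0 under that name), and bdev.json stands in for the fd-based config stream:

    # Parameters come from target/dif.sh@103 for this pass: NULL_DIF=3,
    # bs=128k, numjobs=3, iodepth=3, runtime=5. LD_PRELOAD loads the SPDK
    # fio plugin so the spdk_bdev ioengine is available.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json \
        --thread --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=5

The empty asan_lib= assignments traced before each run are the sanitizer probe: ldd on the plugin is grepped for libasan and libclang_rt.asan, and any hit would be prepended to LD_PRELOAD so the sanitizer runtime loads ahead of the plugin; in this build none is linked, hence the lone plugin path.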
00:30:42.173 fio-3.35 00:30:42.173 Starting 3 threads 00:30:42.173 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.734 00:30:48.734 filename0: (groupid=0, jobs=1): err= 0: pid=2305778: Fri Jul 12 11:09:04 2024 00:30:48.734 read: IOPS=221, BW=27.6MiB/s (29.0MB/s)(139MiB/5045msec) 00:30:48.734 slat (nsec): min=5433, max=49413, avg=7528.50, stdev=2017.17 00:30:48.734 clat (usec): min=3777, max=90479, avg=13523.95, stdev=15373.19 00:30:48.734 lat (usec): min=3783, max=90485, avg=13531.48, stdev=15373.28 00:30:48.734 clat percentiles (usec): 00:30:48.734 | 1.00th=[ 4555], 5.00th=[ 5145], 10.00th=[ 5473], 20.00th=[ 6128], 00:30:48.734 | 30.00th=[ 6652], 40.00th=[ 7177], 50.00th=[ 7701], 60.00th=[ 8291], 00:30:48.734 | 70.00th=[ 8979], 80.00th=[ 9896], 90.00th=[47973], 95.00th=[49546], 00:30:48.734 | 99.00th=[51643], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:30:48.734 | 99.99th=[90702] 00:30:48.734 bw ( KiB/s): min=12288, max=48384, per=36.58%, avg=28492.80, stdev=10497.08, samples=10 00:30:48.734 iops : min= 96, max= 378, avg=222.60, stdev=82.01, samples=10 00:30:48.734 lat (msec) : 4=0.09%, 10=80.99%, 20=4.75%, 50=10.04%, 100=4.13% 00:30:48.734 cpu : usr=95.48%, sys=4.18%, ctx=16, majf=0, minf=121 00:30:48.734 IO depths : 1=3.5%, 2=96.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.734 issued rwts: total=1115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:48.734 filename0: (groupid=0, jobs=1): err= 0: pid=2305779: Fri Jul 12 11:09:04 2024 00:30:48.734 read: IOPS=216, BW=27.0MiB/s (28.3MB/s)(136MiB/5019msec) 00:30:48.734 slat (nsec): min=5414, max=33472, avg=7333.68, stdev=1737.57 00:30:48.734 clat (usec): min=4103, max=91426, avg=13867.30, stdev=14695.57 00:30:48.734 lat (usec): min=4112, max=91432, avg=13874.64, stdev=14695.69 00:30:48.734 clat percentiles (usec): 00:30:48.734 | 1.00th=[ 4555], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6718], 00:30:48.734 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8225], 60.00th=[ 8717], 00:30:48.734 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[48497], 95.00th=[49546], 00:30:48.734 | 99.00th=[51119], 99.50th=[51643], 99.90th=[51643], 99.95th=[91751], 00:30:48.734 | 99.99th=[91751] 00:30:48.734 bw ( KiB/s): min=16128, max=45312, per=35.56%, avg=27699.20, stdev=9663.26, samples=10 00:30:48.734 iops : min= 126, max= 354, avg=216.40, stdev=75.49, samples=10 00:30:48.734 lat (msec) : 10=78.06%, 20=7.65%, 50=10.88%, 100=3.41% 00:30:48.734 cpu : usr=95.44%, sys=4.28%, ctx=10, majf=0, minf=101 00:30:48.734 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.734 issued rwts: total=1085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:48.734 filename0: (groupid=0, jobs=1): err= 0: pid=2305780: Fri Jul 12 11:09:04 2024 00:30:48.734 read: IOPS=173, BW=21.7MiB/s (22.8MB/s)(109MiB/5005msec) 00:30:48.734 slat (nsec): min=5412, max=31684, avg=6359.01, stdev=1347.63 00:30:48.734 clat (usec): min=3969, max=90767, avg=17242.84, stdev=18038.46 00:30:48.734 lat (usec): min=3975, max=90774, avg=17249.20, stdev=18038.79 00:30:48.734 clat percentiles 
(usec): 00:30:48.734 | 1.00th=[ 4490], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 7111], 00:30:48.734 | 30.00th=[ 7767], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9634], 00:30:48.734 | 70.00th=[10421], 80.00th=[47449], 90.00th=[49546], 95.00th=[50594], 00:30:48.734 | 99.00th=[88605], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:30:48.734 | 99.99th=[90702] 00:30:48.734 bw ( KiB/s): min=10752, max=43008, per=28.53%, avg=22220.80, stdev=10514.92, samples=10 00:30:48.734 iops : min= 84, max= 336, avg=173.60, stdev=82.15, samples=10 00:30:48.734 lat (msec) : 4=0.11%, 10=65.06%, 20=14.14%, 50=12.53%, 100=8.16% 00:30:48.734 cpu : usr=96.20%, sys=3.50%, ctx=12, majf=0, minf=119 00:30:48.734 IO depths : 1=5.6%, 2=94.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:48.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.734 issued rwts: total=870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:48.734 00:30:48.734 Run status group 0 (all jobs): 00:30:48.734 READ: bw=76.1MiB/s (79.8MB/s), 21.7MiB/s-27.6MiB/s (22.8MB/s-29.0MB/s), io=384MiB (402MB), run=5005-5045msec 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.734 bdev_null0 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.734 [2024-07-12 11:09:04.773728] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.734 bdev_null1 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.734 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.735 bdev_null2 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:48.735 { 00:30:48.735 "params": { 00:30:48.735 "name": "Nvme$subsystem", 00:30:48.735 "trtype": "$TEST_TRANSPORT", 00:30:48.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:48.735 "adrfam": "ipv4", 00:30:48.735 "trsvcid": "$NVMF_PORT", 00:30:48.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:48.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:48.735 "hdgst": ${hdgst:-false}, 00:30:48.735 "ddgst": ${ddgst:-false} 00:30:48.735 }, 00:30:48.735 "method": "bdev_nvme_attach_controller" 00:30:48.735 } 00:30:48.735 EOF 00:30:48.735 )") 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:48.735 { 00:30:48.735 "params": { 00:30:48.735 "name": "Nvme$subsystem", 00:30:48.735 "trtype": "$TEST_TRANSPORT", 00:30:48.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:48.735 "adrfam": "ipv4", 00:30:48.735 "trsvcid": "$NVMF_PORT", 00:30:48.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:48.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:48.735 "hdgst": ${hdgst:-false}, 00:30:48.735 "ddgst": ${ddgst:-false} 00:30:48.735 }, 00:30:48.735 "method": "bdev_nvme_attach_controller" 00:30:48.735 } 00:30:48.735 EOF 00:30:48.735 )") 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:48.735 11:09:04 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:48.735 { 00:30:48.735 "params": { 00:30:48.735 "name": "Nvme$subsystem", 00:30:48.735 "trtype": "$TEST_TRANSPORT", 00:30:48.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:48.735 "adrfam": "ipv4", 00:30:48.735 "trsvcid": "$NVMF_PORT", 00:30:48.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:48.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:48.735 "hdgst": ${hdgst:-false}, 00:30:48.735 "ddgst": ${ddgst:-false} 00:30:48.735 }, 00:30:48.735 "method": "bdev_nvme_attach_controller" 00:30:48.735 } 00:30:48.735 EOF 00:30:48.735 )") 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:48.735 "params": { 00:30:48.735 "name": "Nvme0", 00:30:48.735 "trtype": "tcp", 00:30:48.735 "traddr": "10.0.0.2", 00:30:48.735 "adrfam": "ipv4", 00:30:48.735 "trsvcid": "4420", 00:30:48.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:48.735 "hdgst": false, 00:30:48.735 "ddgst": false 00:30:48.735 }, 00:30:48.735 "method": "bdev_nvme_attach_controller" 00:30:48.735 },{ 00:30:48.735 "params": { 00:30:48.735 "name": "Nvme1", 00:30:48.735 "trtype": "tcp", 00:30:48.735 "traddr": "10.0.0.2", 00:30:48.735 "adrfam": "ipv4", 00:30:48.735 "trsvcid": "4420", 00:30:48.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:48.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:48.735 "hdgst": false, 00:30:48.735 "ddgst": false 00:30:48.735 }, 00:30:48.735 "method": "bdev_nvme_attach_controller" 00:30:48.735 },{ 00:30:48.735 "params": { 00:30:48.735 "name": "Nvme2", 00:30:48.735 "trtype": "tcp", 00:30:48.735 "traddr": "10.0.0.2", 00:30:48.735 "adrfam": "ipv4", 00:30:48.735 "trsvcid": "4420", 00:30:48.735 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:48.735 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:48.735 "hdgst": false, 00:30:48.735 "ddgst": false 00:30:48.735 }, 00:30:48.735 "method": "bdev_nvme_attach_controller" 00:30:48.735 }' 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:48.735 
11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:48.735 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.735 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:48.735 ... 00:30:48.735 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:48.735 ... 00:30:48.735 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:48.735 ... 00:30:48.735 fio-3.35 00:30:48.735 Starting 24 threads 00:30:48.735 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.071 00:31:01.071 filename0: (groupid=0, jobs=1): err= 0: pid=2307218: Fri Jul 12 11:09:16 2024 00:31:01.071 read: IOPS=688, BW=2753KiB/s (2819kB/s)(26.9MiB/10017msec) 00:31:01.071 slat (usec): min=5, max=119, avg=14.56, stdev=13.14 00:31:01.071 clat (usec): min=8544, max=53542, avg=23120.76, stdev=3549.30 00:31:01.071 lat (usec): min=8551, max=53574, avg=23135.33, stdev=3550.25 00:31:01.071 clat percentiles (usec): 00:31:01.071 | 1.00th=[11994], 5.00th=[16450], 10.00th=[19006], 20.00th=[22676], 00:31:01.071 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.071 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[27132], 00:31:01.071 | 99.00th=[34866], 99.50th=[37487], 99.90th=[53216], 99.95th=[53740], 00:31:01.071 | 99.99th=[53740] 00:31:01.071 bw ( KiB/s): min= 2584, max= 2965, per=4.24%, avg=2756.70, stdev=93.75, samples=20 00:31:01.071 iops : min= 646, max= 741, avg=689.10, stdev=23.42, samples=20 00:31:01.071 lat (msec) : 10=0.15%, 20=12.20%, 50=87.42%, 100=0.23% 00:31:01.071 cpu : usr=98.32%, sys=1.10%, ctx=84, majf=0, minf=34 00:31:01.071 IO depths : 1=3.1%, 2=6.4%, 4=19.3%, 8=61.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:31:01.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.071 complete : 0=0.0%, 4=92.9%, 8=1.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.071 issued rwts: total=6894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.071 filename0: (groupid=0, jobs=1): err= 0: pid=2307219: Fri Jul 12 11:09:16 2024 00:31:01.071 read: IOPS=674, BW=2696KiB/s (2761kB/s)(26.3MiB/10002msec) 00:31:01.071 slat (usec): min=5, max=112, avg=17.25, stdev=13.86 00:31:01.071 clat (usec): min=11449, max=42246, avg=23603.85, stdev=2740.71 00:31:01.071 lat (usec): min=11455, max=42263, avg=23621.10, stdev=2740.61 00:31:01.071 clat percentiles (usec): 00:31:01.071 | 1.00th=[15270], 5.00th=[19268], 10.00th=[22414], 20.00th=[22938], 00:31:01.071 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.071 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[28181], 00:31:01.071 | 99.00th=[33817], 99.50th=[35914], 99.90th=[42206], 99.95th=[42206], 00:31:01.071 | 99.99th=[42206] 00:31:01.071 bw ( KiB/s): min= 2560, max= 3024, per=4.15%, avg=2697.53, stdev=105.92, samples=19 00:31:01.071 iops : min= 640, max= 756, avg=674.37, stdev=26.48, samples=19 00:31:01.071 lat 
(msec) : 20=5.65%, 50=94.35% 00:31:01.071 cpu : usr=98.93%, sys=0.75%, ctx=28, majf=0, minf=25 00:31:01.071 IO depths : 1=3.6%, 2=7.8%, 4=19.4%, 8=59.6%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:01.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 issued rwts: total=6742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.072 filename0: (groupid=0, jobs=1): err= 0: pid=2307220: Fri Jul 12 11:09:16 2024 00:31:01.072 read: IOPS=673, BW=2693KiB/s (2757kB/s)(26.4MiB/10052msec) 00:31:01.072 slat (usec): min=5, max=110, avg=12.16, stdev=12.10 00:31:01.072 clat (usec): min=11360, max=65249, avg=23643.77, stdev=3403.88 00:31:01.072 lat (usec): min=11366, max=65255, avg=23655.93, stdev=3404.73 00:31:01.072 clat percentiles (usec): 00:31:01.072 | 1.00th=[14484], 5.00th=[17695], 10.00th=[22414], 20.00th=[22938], 00:31:01.072 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:31:01.072 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[28705], 00:31:01.072 | 99.00th=[36439], 99.50th=[41681], 99.90th=[52691], 99.95th=[65274], 00:31:01.072 | 99.99th=[65274] 00:31:01.072 bw ( KiB/s): min= 2528, max= 2864, per=4.16%, avg=2702.10, stdev=98.57, samples=20 00:31:01.072 iops : min= 632, max= 716, avg=675.50, stdev=24.65, samples=20 00:31:01.072 lat (msec) : 20=6.84%, 50=93.00%, 100=0.16% 00:31:01.072 cpu : usr=99.05%, sys=0.60%, ctx=94, majf=0, minf=22 00:31:01.072 IO depths : 1=3.9%, 2=7.8%, 4=19.2%, 8=60.3%, 16=8.9%, 32=0.0%, >=64=0.0% 00:31:01.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 issued rwts: total=6767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.072 filename0: (groupid=0, jobs=1): err= 0: pid=2307221: Fri Jul 12 11:09:16 2024 00:31:01.072 read: IOPS=683, BW=2732KiB/s (2798kB/s)(26.7MiB/10019msec) 00:31:01.072 slat (nsec): min=5466, max=99744, avg=17940.71, stdev=15974.25 00:31:01.072 clat (usec): min=13095, max=39121, avg=23268.27, stdev=1797.88 00:31:01.072 lat (usec): min=13104, max=39144, avg=23286.21, stdev=1798.18 00:31:01.072 clat percentiles (usec): 00:31:01.072 | 1.00th=[15270], 5.00th=[20841], 10.00th=[22414], 20.00th=[22938], 00:31:01.072 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.072 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:31:01.072 | 99.00th=[26084], 99.50th=[29230], 99.90th=[36439], 99.95th=[38536], 00:31:01.072 | 99.99th=[39060] 00:31:01.072 bw ( KiB/s): min= 2560, max= 2992, per=4.20%, avg=2730.90, stdev=94.18, samples=20 00:31:01.072 iops : min= 640, max= 748, avg=682.70, stdev=23.56, samples=20 00:31:01.072 lat (msec) : 20=4.28%, 50=95.72% 00:31:01.072 cpu : usr=98.04%, sys=1.07%, ctx=71, majf=0, minf=16 00:31:01.072 IO depths : 1=5.8%, 2=11.7%, 4=23.9%, 8=51.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:01.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 issued rwts: total=6844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.072 filename0: (groupid=0, jobs=1): err= 0: pid=2307222: Fri Jul 12 11:09:16 2024 00:31:01.072 
read: IOPS=681, BW=2726KiB/s (2791kB/s)(26.6MiB/10003msec) 00:31:01.072 slat (usec): min=5, max=114, avg=17.83, stdev=15.21 00:31:01.072 clat (usec): min=7159, max=40485, avg=23348.93, stdev=3491.82 00:31:01.072 lat (usec): min=7167, max=40530, avg=23366.76, stdev=3492.69 00:31:01.072 clat percentiles (usec): 00:31:01.072 | 1.00th=[13173], 5.00th=[16450], 10.00th=[19792], 20.00th=[22676], 00:31:01.072 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.072 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25560], 95.00th=[30016], 00:31:01.072 | 99.00th=[34866], 99.50th=[37487], 99.90th=[39584], 99.95th=[40633], 00:31:01.072 | 99.99th=[40633] 00:31:01.072 bw ( KiB/s): min= 2560, max= 2896, per=4.19%, avg=2721.32, stdev=93.68, samples=19 00:31:01.072 iops : min= 640, max= 724, avg=680.26, stdev=23.46, samples=19 00:31:01.072 lat (msec) : 10=0.09%, 20=10.24%, 50=89.67% 00:31:01.072 cpu : usr=96.79%, sys=1.75%, ctx=94, majf=0, minf=17 00:31:01.072 IO depths : 1=2.9%, 2=5.8%, 4=15.1%, 8=65.1%, 16=11.1%, 32=0.0%, >=64=0.0% 00:31:01.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 complete : 0=0.0%, 4=91.8%, 8=4.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 issued rwts: total=6816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.072 filename0: (groupid=0, jobs=1): err= 0: pid=2307223: Fri Jul 12 11:09:16 2024 00:31:01.072 read: IOPS=676, BW=2706KiB/s (2771kB/s)(26.4MiB/10006msec) 00:31:01.072 slat (usec): min=5, max=113, avg=29.66, stdev=20.02 00:31:01.072 clat (usec): min=16566, max=34928, avg=23360.08, stdev=980.78 00:31:01.072 lat (usec): min=16575, max=34960, avg=23389.73, stdev=980.90 00:31:01.072 clat percentiles (usec): 00:31:01.072 | 1.00th=[21365], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:31:01.072 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:31:01.072 | 70.00th=[23725], 80.00th=[23725], 90.00th=[24249], 95.00th=[24511], 00:31:01.072 | 99.00th=[25560], 99.50th=[26084], 99.90th=[34866], 99.95th=[34866], 00:31:01.072 | 99.99th=[34866] 00:31:01.072 bw ( KiB/s): min= 2554, max= 2816, per=4.16%, avg=2701.42, stdev=72.72, samples=19 00:31:01.072 iops : min= 638, max= 704, avg=675.32, stdev=18.26, samples=19 00:31:01.072 lat (msec) : 20=0.62%, 50=99.38% 00:31:01.072 cpu : usr=97.02%, sys=1.56%, ctx=73, majf=0, minf=19 00:31:01.072 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:01.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.072 filename0: (groupid=0, jobs=1): err= 0: pid=2307224: Fri Jul 12 11:09:16 2024 00:31:01.072 read: IOPS=663, BW=2656KiB/s (2719kB/s)(25.9MiB/10004msec) 00:31:01.072 slat (usec): min=5, max=114, avg=20.80, stdev=16.10 00:31:01.072 clat (usec): min=3892, max=49189, avg=23960.42, stdev=3900.03 00:31:01.072 lat (usec): min=3898, max=49207, avg=23981.22, stdev=3899.86 00:31:01.072 clat percentiles (usec): 00:31:01.072 | 1.00th=[12256], 5.00th=[17957], 10.00th=[21890], 20.00th=[22938], 00:31:01.072 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:31:01.072 | 70.00th=[23987], 80.00th=[24773], 90.00th=[28181], 95.00th=[31327], 00:31:01.072 | 99.00th=[36963], 99.50th=[39060], 
99.90th=[44303], 99.95th=[49021], 00:31:01.072 | 99.99th=[49021] 00:31:01.072 bw ( KiB/s): min= 2432, max= 2768, per=4.07%, avg=2641.32, stdev=79.30, samples=19 00:31:01.072 iops : min= 608, max= 692, avg=660.26, stdev=19.85, samples=19 00:31:01.072 lat (msec) : 4=0.09%, 10=0.39%, 20=6.71%, 50=92.80% 00:31:01.072 cpu : usr=98.26%, sys=1.11%, ctx=90, majf=0, minf=23 00:31:01.072 IO depths : 1=1.9%, 2=4.0%, 4=12.0%, 8=69.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:01.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 complete : 0=0.0%, 4=91.2%, 8=4.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 issued rwts: total=6642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.072 filename0: (groupid=0, jobs=1): err= 0: pid=2307226: Fri Jul 12 11:09:16 2024 00:31:01.072 read: IOPS=677, BW=2710KiB/s (2775kB/s)(26.5MiB/10013msec) 00:31:01.072 slat (usec): min=5, max=114, avg=20.48, stdev=18.89 00:31:01.072 clat (usec): min=11425, max=45030, avg=23472.88, stdev=3411.09 00:31:01.072 lat (usec): min=11431, max=45050, avg=23493.36, stdev=3411.69 00:31:01.072 clat percentiles (usec): 00:31:01.072 | 1.00th=[14484], 5.00th=[16909], 10.00th=[20055], 20.00th=[22676], 00:31:01.072 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.072 | 70.00th=[23725], 80.00th=[23987], 90.00th=[25035], 95.00th=[29754], 00:31:01.072 | 99.00th=[35914], 99.50th=[39060], 99.90th=[41681], 99.95th=[44827], 00:31:01.072 | 99.99th=[44827] 00:31:01.072 bw ( KiB/s): min= 2528, max= 2912, per=4.18%, avg=2712.50, stdev=95.33, samples=20 00:31:01.072 iops : min= 632, max= 728, avg=678.10, stdev=23.85, samples=20 00:31:01.072 lat (msec) : 20=10.05%, 50=89.95% 00:31:01.072 cpu : usr=98.79%, sys=0.86%, ctx=63, majf=0, minf=20 00:31:01.072 IO depths : 1=1.0%, 2=2.3%, 4=9.4%, 8=72.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:31:01.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 complete : 0=0.0%, 4=92.2%, 8=3.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.072 filename1: (groupid=0, jobs=1): err= 0: pid=2307227: Fri Jul 12 11:09:16 2024 00:31:01.072 read: IOPS=659, BW=2639KiB/s (2703kB/s)(25.8MiB/10002msec) 00:31:01.072 slat (nsec): min=5464, max=98315, avg=14057.70, stdev=11234.02 00:31:01.072 clat (usec): min=4262, max=47295, avg=24153.34, stdev=3515.32 00:31:01.072 lat (usec): min=4268, max=47314, avg=24167.40, stdev=3515.72 00:31:01.072 clat percentiles (usec): 00:31:01.072 | 1.00th=[14222], 5.00th=[20579], 10.00th=[22414], 20.00th=[22938], 00:31:01.072 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:31:01.072 | 70.00th=[23987], 80.00th=[24249], 90.00th=[27657], 95.00th=[31851], 00:31:01.072 | 99.00th=[36439], 99.50th=[38011], 99.90th=[47449], 99.95th=[47449], 00:31:01.072 | 99.99th=[47449] 00:31:01.072 bw ( KiB/s): min= 2304, max= 2800, per=4.06%, avg=2635.47, stdev=119.90, samples=19 00:31:01.072 iops : min= 576, max= 700, avg=658.84, stdev=29.97, samples=19 00:31:01.072 lat (msec) : 10=0.42%, 20=3.97%, 50=95.61% 00:31:01.072 cpu : usr=98.92%, sys=0.75%, ctx=94, majf=0, minf=25 00:31:01.072 IO depths : 1=1.1%, 2=4.9%, 4=18.1%, 8=63.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:01.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 complete : 0=0.0%, 4=92.6%, 8=2.8%, 16=4.6%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 issued rwts: total=6600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.072 filename1: (groupid=0, jobs=1): err= 0: pid=2307228: Fri Jul 12 11:09:16 2024 00:31:01.072 read: IOPS=672, BW=2691KiB/s (2756kB/s)(26.3MiB/10006msec) 00:31:01.072 slat (nsec): min=5570, max=91862, avg=13760.49, stdev=10756.58 00:31:01.072 clat (usec): min=7531, max=41223, avg=23667.01, stdev=2727.45 00:31:01.072 lat (usec): min=7537, max=41246, avg=23680.77, stdev=2727.82 00:31:01.072 clat percentiles (usec): 00:31:01.072 | 1.00th=[13960], 5.00th=[22152], 10.00th=[22676], 20.00th=[22938], 00:31:01.072 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:31:01.072 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[28967], 00:31:01.072 | 99.00th=[33817], 99.50th=[36963], 99.90th=[41157], 99.95th=[41157], 00:31:01.072 | 99.99th=[41157] 00:31:01.072 bw ( KiB/s): min= 2528, max= 2880, per=4.13%, avg=2681.79, stdev=96.23, samples=19 00:31:01.072 iops : min= 632, max= 720, avg=670.32, stdev=24.02, samples=19 00:31:01.072 lat (msec) : 10=0.24%, 20=3.77%, 50=95.99% 00:31:01.072 cpu : usr=99.03%, sys=0.64%, ctx=64, majf=0, minf=25 00:31:01.072 IO depths : 1=4.5%, 2=9.6%, 4=21.8%, 8=55.7%, 16=8.3%, 32=0.0%, >=64=0.0% 00:31:01.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.072 issued rwts: total=6732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.073 filename1: (groupid=0, jobs=1): err= 0: pid=2307229: Fri Jul 12 11:09:16 2024 00:31:01.073 read: IOPS=663, BW=2655KiB/s (2719kB/s)(25.9MiB/10002msec) 00:31:01.073 slat (usec): min=5, max=110, avg=16.71, stdev=14.31 00:31:01.073 clat (usec): min=3551, max=54036, avg=24024.10, stdev=3539.58 00:31:01.073 lat (usec): min=3558, max=54053, avg=24040.81, stdev=3539.74 00:31:01.073 clat percentiles (usec): 00:31:01.073 | 1.00th=[13173], 5.00th=[21627], 10.00th=[22676], 20.00th=[23200], 00:31:01.073 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:31:01.073 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26084], 95.00th=[31327], 00:31:01.073 | 99.00th=[35390], 99.50th=[36963], 99.90th=[53740], 99.95th=[54264], 00:31:01.073 | 99.99th=[54264] 00:31:01.073 bw ( KiB/s): min= 2308, max= 2768, per=4.07%, avg=2645.32, stdev=110.25, samples=19 00:31:01.073 iops : min= 577, max= 692, avg=661.26, stdev=27.52, samples=19 00:31:01.073 lat (msec) : 4=0.09%, 10=0.54%, 20=3.55%, 50=95.57%, 100=0.24% 00:31:01.073 cpu : usr=96.04%, sys=2.20%, ctx=182, majf=0, minf=23 00:31:01.073 IO depths : 1=0.5%, 2=1.0%, 4=4.6%, 8=78.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:31:01.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 complete : 0=0.0%, 4=89.7%, 8=7.8%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.073 filename1: (groupid=0, jobs=1): err= 0: pid=2307230: Fri Jul 12 11:09:16 2024 00:31:01.073 read: IOPS=687, BW=2752KiB/s (2818kB/s)(26.9MiB/10009msec) 00:31:01.073 slat (usec): min=5, max=118, avg=23.96, stdev=19.00 00:31:01.073 clat (usec): min=12921, max=37518, avg=23037.94, stdev=2102.08 00:31:01.073 lat (usec): min=12927, max=37550, avg=23061.90, stdev=2104.12 00:31:01.073 clat 
percentiles (usec): 00:31:01.073 | 1.00th=[14222], 5.00th=[18482], 10.00th=[22152], 20.00th=[22938], 00:31:01.073 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:31:01.073 | 70.00th=[23725], 80.00th=[23725], 90.00th=[24249], 95.00th=[24773], 00:31:01.073 | 99.00th=[26084], 99.50th=[31589], 99.90th=[36963], 99.95th=[37487], 00:31:01.073 | 99.99th=[37487] 00:31:01.073 bw ( KiB/s): min= 2560, max= 2944, per=4.21%, avg=2736.74, stdev=100.28, samples=19 00:31:01.073 iops : min= 640, max= 736, avg=684.11, stdev=25.08, samples=19 00:31:01.073 lat (msec) : 20=6.16%, 50=93.84% 00:31:01.073 cpu : usr=97.27%, sys=1.31%, ctx=76, majf=0, minf=21 00:31:01.073 IO depths : 1=5.6%, 2=11.4%, 4=23.7%, 8=52.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:01.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 complete : 0=0.0%, 4=93.7%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 issued rwts: total=6886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.073 filename1: (groupid=0, jobs=1): err= 0: pid=2307231: Fri Jul 12 11:09:16 2024 00:31:01.073 read: IOPS=694, BW=2780KiB/s (2846kB/s)(27.2MiB/10004msec) 00:31:01.073 slat (usec): min=5, max=113, avg=21.36, stdev=18.85 00:31:01.073 clat (usec): min=10141, max=40210, avg=22844.23, stdev=3242.39 00:31:01.073 lat (usec): min=10148, max=40216, avg=22865.59, stdev=3244.79 00:31:01.073 clat percentiles (usec): 00:31:01.073 | 1.00th=[13829], 5.00th=[16057], 10.00th=[18220], 20.00th=[22414], 00:31:01.073 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:31:01.073 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[26084], 00:31:01.073 | 99.00th=[34341], 99.50th=[38011], 99.90th=[39584], 99.95th=[40109], 00:31:01.073 | 99.99th=[40109] 00:31:01.073 bw ( KiB/s): min= 2528, max= 3200, per=4.29%, avg=2785.37, stdev=163.36, samples=19 00:31:01.073 iops : min= 632, max= 800, avg=696.32, stdev=40.86, samples=19 00:31:01.073 lat (msec) : 20=13.48%, 50=86.52% 00:31:01.073 cpu : usr=98.27%, sys=1.00%, ctx=86, majf=0, minf=38 00:31:01.073 IO depths : 1=3.9%, 2=8.0%, 4=18.3%, 8=60.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:31:01.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 complete : 0=0.0%, 4=92.7%, 8=2.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 issued rwts: total=6952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.073 filename1: (groupid=0, jobs=1): err= 0: pid=2307232: Fri Jul 12 11:09:16 2024 00:31:01.073 read: IOPS=658, BW=2632KiB/s (2695kB/s)(25.7MiB/10012msec) 00:31:01.073 slat (usec): min=5, max=114, avg=15.69, stdev=13.91 00:31:01.073 clat (usec): min=9682, max=45640, avg=24209.45, stdev=4841.68 00:31:01.073 lat (usec): min=9689, max=45646, avg=24225.14, stdev=4842.59 00:31:01.073 clat percentiles (usec): 00:31:01.073 | 1.00th=[13698], 5.00th=[16188], 10.00th=[18220], 20.00th=[22152], 00:31:01.073 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:31:01.073 | 70.00th=[24249], 80.00th=[27395], 90.00th=[31065], 95.00th=[33817], 00:31:01.073 | 99.00th=[38536], 99.50th=[40633], 99.90th=[40633], 99.95th=[43254], 00:31:01.073 | 99.99th=[45876] 00:31:01.073 bw ( KiB/s): min= 2176, max= 2928, per=4.05%, avg=2631.47, stdev=171.21, samples=19 00:31:01.073 iops : min= 544, max= 732, avg=657.79, stdev=42.81, samples=19 00:31:01.073 lat (msec) : 10=0.06%, 20=14.86%, 50=85.08% 
00:31:01.073 cpu : usr=98.84%, sys=0.73%, ctx=103, majf=0, minf=25 00:31:01.073 IO depths : 1=1.0%, 2=3.4%, 4=10.5%, 8=71.3%, 16=13.7%, 32=0.0%, >=64=0.0% 00:31:01.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 complete : 0=0.0%, 4=90.7%, 8=5.6%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 issued rwts: total=6588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.073 filename1: (groupid=0, jobs=1): err= 0: pid=2307233: Fri Jul 12 11:09:16 2024 00:31:01.073 read: IOPS=726, BW=2906KiB/s (2976kB/s)(28.4MiB/10013msec) 00:31:01.073 slat (nsec): min=5565, max=58475, avg=8679.45, stdev=4763.77 00:31:01.073 clat (usec): min=8160, max=66001, avg=21948.22, stdev=3974.42 00:31:01.073 lat (usec): min=8166, max=66031, avg=21956.90, stdev=3975.31 00:31:01.073 clat percentiles (usec): 00:31:01.073 | 1.00th=[12387], 5.00th=[14877], 10.00th=[15795], 20.00th=[17957], 00:31:01.073 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.073 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:31:01.073 | 99.00th=[28967], 99.50th=[31589], 99.90th=[65799], 99.95th=[65799], 00:31:01.073 | 99.99th=[65799] 00:31:01.073 bw ( KiB/s): min= 2432, max= 3888, per=4.47%, avg=2906.35, stdev=386.34, samples=20 00:31:01.073 iops : min= 608, max= 972, avg=726.55, stdev=96.60, samples=20 00:31:01.073 lat (msec) : 10=0.16%, 20=21.83%, 50=77.78%, 100=0.22% 00:31:01.073 cpu : usr=98.91%, sys=0.65%, ctx=139, majf=0, minf=18 00:31:01.073 IO depths : 1=2.7%, 2=7.6%, 4=21.2%, 8=58.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:01.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 issued rwts: total=7274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.073 filename1: (groupid=0, jobs=1): err= 0: pid=2307235: Fri Jul 12 11:09:16 2024 00:31:01.073 read: IOPS=676, BW=2705KiB/s (2770kB/s)(26.4MiB/10009msec) 00:31:01.073 slat (usec): min=5, max=111, avg=23.32, stdev=18.83 00:31:01.073 clat (usec): min=16444, max=37290, avg=23471.05, stdev=1022.58 00:31:01.073 lat (usec): min=16450, max=37310, avg=23494.37, stdev=1020.23 00:31:01.073 clat percentiles (usec): 00:31:01.073 | 1.00th=[21627], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:31:01.073 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.073 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:31:01.073 | 99.00th=[25560], 99.50th=[25822], 99.90th=[37487], 99.95th=[37487], 00:31:01.073 | 99.99th=[37487] 00:31:01.073 bw ( KiB/s): min= 2554, max= 2816, per=4.16%, avg=2700.53, stdev=73.40, samples=19 00:31:01.073 iops : min= 638, max= 704, avg=675.05, stdev=18.42, samples=19 00:31:01.073 lat (msec) : 20=0.47%, 50=99.53% 00:31:01.073 cpu : usr=98.76%, sys=0.66%, ctx=92, majf=0, minf=24 00:31:01.073 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:01.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.073 filename2: (groupid=0, jobs=1): err= 0: pid=2307236: Fri Jul 12 11:09:16 2024 00:31:01.073 read: IOPS=703, 
BW=2816KiB/s (2883kB/s)(27.5MiB/10010msec) 00:31:01.073 slat (nsec): min=5574, max=80021, avg=11556.08, stdev=9747.71 00:31:01.073 clat (usec): min=9903, max=35135, avg=22639.41, stdev=2735.42 00:31:01.073 lat (usec): min=9909, max=35151, avg=22650.97, stdev=2737.07 00:31:01.073 clat percentiles (usec): 00:31:01.073 | 1.00th=[13304], 5.00th=[16057], 10.00th=[17695], 20.00th=[22676], 00:31:01.073 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.073 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:31:01.073 | 99.00th=[27395], 99.50th=[31065], 99.90th=[34866], 99.95th=[34866], 00:31:01.073 | 99.99th=[35390] 00:31:01.073 bw ( KiB/s): min= 2666, max= 3568, per=4.34%, avg=2818.21, stdev=262.46, samples=19 00:31:01.073 iops : min= 666, max= 892, avg=704.53, stdev=65.63, samples=19 00:31:01.073 lat (msec) : 10=0.09%, 20=12.12%, 50=87.79% 00:31:01.073 cpu : usr=99.20%, sys=0.47%, ctx=69, majf=0, minf=19 00:31:01.073 IO depths : 1=3.4%, 2=9.0%, 4=22.9%, 8=55.5%, 16=9.1%, 32=0.0%, >=64=0.0% 00:31:01.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 issued rwts: total=7046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.073 filename2: (groupid=0, jobs=1): err= 0: pid=2307237: Fri Jul 12 11:09:16 2024 00:31:01.073 read: IOPS=675, BW=2701KiB/s (2766kB/s)(26.4MiB/10002msec) 00:31:01.073 slat (nsec): min=5511, max=99063, avg=15478.38, stdev=12762.07 00:31:01.073 clat (usec): min=3966, max=56823, avg=23587.86, stdev=3169.21 00:31:01.073 lat (usec): min=3972, max=56841, avg=23603.34, stdev=3170.18 00:31:01.073 clat percentiles (usec): 00:31:01.073 | 1.00th=[13173], 5.00th=[19006], 10.00th=[22414], 20.00th=[22938], 00:31:01.073 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:31:01.073 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[28705], 00:31:01.073 | 99.00th=[34341], 99.50th=[38536], 99.90th=[47449], 99.95th=[47449], 00:31:01.073 | 99.99th=[56886] 00:31:01.073 bw ( KiB/s): min= 2496, max= 2816, per=4.14%, avg=2686.63, stdev=93.41, samples=19 00:31:01.073 iops : min= 624, max= 704, avg=671.58, stdev=23.37, samples=19 00:31:01.073 lat (msec) : 4=0.04%, 10=0.43%, 20=5.30%, 50=94.20%, 100=0.03% 00:31:01.073 cpu : usr=99.15%, sys=0.54%, ctx=31, majf=0, minf=28 00:31:01.073 IO depths : 1=2.0%, 2=4.8%, 4=12.6%, 8=67.5%, 16=13.0%, 32=0.0%, >=64=0.0% 00:31:01.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.073 complete : 0=0.0%, 4=91.4%, 8=5.2%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 issued rwts: total=6755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.074 filename2: (groupid=0, jobs=1): err= 0: pid=2307238: Fri Jul 12 11:09:16 2024 00:31:01.074 read: IOPS=681, BW=2725KiB/s (2790kB/s)(26.6MiB/10009msec) 00:31:01.074 slat (usec): min=5, max=120, avg=25.75, stdev=20.99 00:31:01.074 clat (usec): min=9850, max=40743, avg=23254.58, stdev=2771.48 00:31:01.074 lat (usec): min=9859, max=40750, avg=23280.33, stdev=2771.81 00:31:01.074 clat percentiles (usec): 00:31:01.074 | 1.00th=[14746], 5.00th=[18220], 10.00th=[22152], 20.00th=[22676], 00:31:01.074 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:31:01.074 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24511], 95.00th=[25822], 00:31:01.074 | 
99.00th=[36439], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:31:01.074 | 99.99th=[40633] 00:31:01.074 bw ( KiB/s): min= 2560, max= 2896, per=4.19%, avg=2724.11, stdev=91.36, samples=19 00:31:01.074 iops : min= 640, max= 724, avg=680.95, stdev=22.91, samples=19 00:31:01.074 lat (msec) : 10=0.06%, 20=6.82%, 50=93.12% 00:31:01.074 cpu : usr=98.77%, sys=0.79%, ctx=138, majf=0, minf=21 00:31:01.074 IO depths : 1=3.9%, 2=8.3%, 4=19.0%, 8=59.5%, 16=9.3%, 32=0.0%, >=64=0.0% 00:31:01.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 complete : 0=0.0%, 4=92.6%, 8=2.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 issued rwts: total=6818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.074 filename2: (groupid=0, jobs=1): err= 0: pid=2307239: Fri Jul 12 11:09:16 2024 00:31:01.074 read: IOPS=689, BW=2759KiB/s (2826kB/s)(27.0MiB/10002msec) 00:31:01.074 slat (usec): min=5, max=100, avg=16.17, stdev=13.44 00:31:01.074 clat (usec): min=5109, max=47387, avg=23063.05, stdev=3195.08 00:31:01.074 lat (usec): min=5115, max=47407, avg=23079.22, stdev=3195.81 00:31:01.074 clat percentiles (usec): 00:31:01.074 | 1.00th=[13435], 5.00th=[17171], 10.00th=[20055], 20.00th=[22676], 00:31:01.074 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.074 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[25560], 00:31:01.074 | 99.00th=[33162], 99.50th=[36963], 99.90th=[47449], 99.95th=[47449], 00:31:01.074 | 99.99th=[47449] 00:31:01.074 bw ( KiB/s): min= 2554, max= 3008, per=4.23%, avg=2744.63, stdev=106.82, samples=19 00:31:01.074 iops : min= 638, max= 752, avg=686.11, stdev=26.77, samples=19 00:31:01.074 lat (msec) : 10=0.49%, 20=9.45%, 50=90.06% 00:31:01.074 cpu : usr=99.22%, sys=0.44%, ctx=17, majf=0, minf=29 00:31:01.074 IO depths : 1=1.6%, 2=6.0%, 4=18.8%, 8=62.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:31:01.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 complete : 0=0.0%, 4=92.7%, 8=2.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 issued rwts: total=6900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.074 filename2: (groupid=0, jobs=1): err= 0: pid=2307240: Fri Jul 12 11:09:16 2024 00:31:01.074 read: IOPS=674, BW=2698KiB/s (2763kB/s)(26.4MiB/10006msec) 00:31:01.074 slat (usec): min=5, max=110, avg=17.84, stdev=15.55 00:31:01.074 clat (usec): min=7354, max=44658, avg=23584.87, stdev=3693.26 00:31:01.074 lat (usec): min=7360, max=44665, avg=23602.72, stdev=3693.73 00:31:01.074 clat percentiles (usec): 00:31:01.074 | 1.00th=[12649], 5.00th=[17171], 10.00th=[20579], 20.00th=[22676], 00:31:01.074 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:31:01.074 | 70.00th=[23725], 80.00th=[24249], 90.00th=[26084], 95.00th=[30278], 00:31:01.074 | 99.00th=[37487], 99.50th=[39060], 99.90th=[43779], 99.95th=[44827], 00:31:01.074 | 99.99th=[44827] 00:31:01.074 bw ( KiB/s): min= 2512, max= 2816, per=4.14%, avg=2691.37, stdev=75.53, samples=19 00:31:01.074 iops : min= 628, max= 704, avg=672.84, stdev=18.88, samples=19 00:31:01.074 lat (msec) : 10=0.21%, 20=8.37%, 50=91.42% 00:31:01.074 cpu : usr=98.87%, sys=0.79%, ctx=9, majf=0, minf=25 00:31:01.074 IO depths : 1=2.2%, 2=5.0%, 4=15.0%, 8=66.1%, 16=11.7%, 32=0.0%, >=64=0.0% 00:31:01.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 complete : 0=0.0%, 4=92.0%, 
8=3.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 issued rwts: total=6750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.074 filename2: (groupid=0, jobs=1): err= 0: pid=2307241: Fri Jul 12 11:09:16 2024 00:31:01.074 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10016msec) 00:31:01.074 slat (usec): min=5, max=114, avg=18.23, stdev=16.01 00:31:01.074 clat (usec): min=10506, max=45247, avg=23848.14, stdev=4485.27 00:31:01.074 lat (usec): min=10512, max=45256, avg=23866.37, stdev=4486.66 00:31:01.074 clat percentiles (usec): 00:31:01.074 | 1.00th=[13698], 5.00th=[15926], 10.00th=[18482], 20.00th=[22414], 00:31:01.074 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:31:01.074 | 70.00th=[23987], 80.00th=[25035], 90.00th=[29492], 95.00th=[32637], 00:31:01.074 | 99.00th=[38536], 99.50th=[39584], 99.90th=[43779], 99.95th=[45351], 00:31:01.074 | 99.99th=[45351] 00:31:01.074 bw ( KiB/s): min= 2464, max= 2880, per=4.11%, avg=2668.50, stdev=101.06, samples=20 00:31:01.074 iops : min= 616, max= 720, avg=667.10, stdev=25.26, samples=20 00:31:01.074 lat (msec) : 20=13.17%, 50=86.83% 00:31:01.074 cpu : usr=98.86%, sys=0.81%, ctx=15, majf=0, minf=23 00:31:01.074 IO depths : 1=2.2%, 2=4.8%, 4=14.8%, 8=67.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:31:01.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 complete : 0=0.0%, 4=91.7%, 8=3.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 issued rwts: total=6676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.074 filename2: (groupid=0, jobs=1): err= 0: pid=2307242: Fri Jul 12 11:09:16 2024 00:31:01.074 read: IOPS=679, BW=2719KiB/s (2784kB/s)(26.6MiB/10004msec) 00:31:01.074 slat (nsec): min=5438, max=76012, avg=18610.51, stdev=13219.86 00:31:01.074 clat (usec): min=4722, max=39344, avg=23372.79, stdev=1923.66 00:31:01.074 lat (usec): min=4728, max=39354, avg=23391.40, stdev=1923.99 00:31:01.074 clat percentiles (usec): 00:31:01.074 | 1.00th=[15401], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:31:01.074 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.074 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:31:01.074 | 99.00th=[26608], 99.50th=[31065], 99.90th=[38536], 99.95th=[39060], 00:31:01.074 | 99.99th=[39584] 00:31:01.074 bw ( KiB/s): min= 2554, max= 2816, per=4.16%, avg=2700.79, stdev=72.19, samples=19 00:31:01.074 iops : min= 638, max= 704, avg=675.11, stdev=18.12, samples=19 00:31:01.074 lat (msec) : 10=0.50%, 20=1.68%, 50=97.82% 00:31:01.074 cpu : usr=96.40%, sys=1.82%, ctx=103, majf=0, minf=25 00:31:01.074 IO depths : 1=4.0%, 2=10.0%, 4=24.2%, 8=53.2%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:01.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 issued rwts: total=6800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.074 filename2: (groupid=0, jobs=1): err= 0: pid=2307244: Fri Jul 12 11:09:16 2024 00:31:01.074 read: IOPS=676, BW=2704KiB/s (2769kB/s)(26.4MiB/10011msec) 00:31:01.074 slat (usec): min=5, max=101, avg=25.35, stdev=17.16 00:31:01.074 clat (usec): min=16552, max=39669, avg=23453.86, stdev=1097.56 00:31:01.074 lat (usec): min=16560, max=39687, avg=23479.22, stdev=1095.55 00:31:01.074 clat percentiles 
(usec): 00:31:01.074 | 1.00th=[21627], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:31:01.074 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:31:01.074 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:31:01.074 | 99.00th=[25560], 99.50th=[26084], 99.90th=[39584], 99.95th=[39584], 00:31:01.074 | 99.99th=[39584] 00:31:01.074 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2701.16, stdev=84.27, samples=19 00:31:01.074 iops : min= 640, max= 704, avg=675.26, stdev=21.07, samples=19 00:31:01.074 lat (msec) : 20=0.47%, 50=99.53% 00:31:01.074 cpu : usr=99.02%, sys=0.60%, ctx=93, majf=0, minf=28 00:31:01.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:01.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.074 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.074 00:31:01.074 Run status group 0 (all jobs): 00:31:01.074 READ: bw=63.4MiB/s (66.5MB/s), 2632KiB/s-2906KiB/s (2695kB/s-2976kB/s), io=638MiB (669MB), run=10002-10052msec 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.074 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.075 bdev_null0 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
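The xtrace above tears down the three rand_params subsystems and then rebuilds two of them for the multi-file run: a null bdev with 512-byte blocks, 16 bytes of metadata, and DIF type 1 is created and exported over NVMe/TCP on 10.0.0.2:4420. The same sequence can be reproduced by hand against a running SPDK target with scripts/rpc.py; this is a minimal sketch mirroring the names in the log, assuming the TCP transport has already been created (the harness does that earlier in the test):

# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 --
# the rpc_cmd bdev_null_create call traced above.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# Expose the bdev over NVMe/TCP: subsystem, namespace, listener.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The second subsystem (cnode1/bdev_null1) is created the same way with serial number 53313233-1, which is what the repeated trace records below show.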
00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.075 [2024-07-12 11:09:16.657066] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.075 bdev_null1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.075 { 00:31:01.075 "params": { 00:31:01.075 "name": "Nvme$subsystem", 00:31:01.075 "trtype": "$TEST_TRANSPORT", 00:31:01.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.075 "adrfam": "ipv4", 00:31:01.075 "trsvcid": "$NVMF_PORT", 00:31:01.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.075 "hdgst": ${hdgst:-false}, 00:31:01.075 "ddgst": ${ddgst:-false} 00:31:01.075 }, 00:31:01.075 "method": "bdev_nvme_attach_controller" 00:31:01.075 } 00:31:01.075 EOF 00:31:01.075 )") 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.075 { 00:31:01.075 "params": { 00:31:01.075 "name": "Nvme$subsystem", 00:31:01.075 "trtype": "$TEST_TRANSPORT", 00:31:01.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.075 "adrfam": "ipv4", 00:31:01.075 "trsvcid": "$NVMF_PORT", 00:31:01.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.075 "hdgst": ${hdgst:-false}, 00:31:01.075 "ddgst": ${ddgst:-false} 00:31:01.075 }, 00:31:01.075 "method": "bdev_nvme_attach_controller" 00:31:01.075 } 00:31:01.075 EOF 00:31:01.075 )") 00:31:01.075 11:09:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:01.075 11:09:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:01.075 "params": { 00:31:01.075 "name": "Nvme0", 00:31:01.075 "trtype": "tcp", 00:31:01.075 "traddr": "10.0.0.2", 00:31:01.075 "adrfam": "ipv4", 00:31:01.075 "trsvcid": "4420", 00:31:01.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.075 "hdgst": false, 00:31:01.075 "ddgst": false 00:31:01.075 }, 00:31:01.075 "method": "bdev_nvme_attach_controller" 00:31:01.075 },{ 00:31:01.075 "params": { 00:31:01.075 "name": "Nvme1", 00:31:01.075 "trtype": "tcp", 00:31:01.075 "traddr": "10.0.0.2", 00:31:01.076 "adrfam": "ipv4", 00:31:01.076 "trsvcid": "4420", 00:31:01.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:01.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:01.076 "hdgst": false, 00:31:01.076 "ddgst": false 00:31:01.076 }, 00:31:01.076 "method": "bdev_nvme_attach_controller" 00:31:01.076 }' 00:31:01.076 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.076 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.076 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.076 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.076 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:01.076 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.076 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.076 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.076 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:01.076 11:09:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.076 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:01.076 ... 00:31:01.076 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:01.076 ... 
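At this point dif.sh has generated the fio job file on fd 61 and gen_nvmf_target_json has emitted the attach-controller JSON on fd 62; the two filename banners above echo the resulting workload. A standalone equivalent, with both written to regular files, might look like the sketch below. The bdev names Nvme0n1/Nvme1n1 follow SPDK's usual <controller>n<namespace> convention, and time_based is inferred from the 5001-5002 msec runtimes in the run status further down; neither appears verbatim in the log, and the plugin path is shortened from the workspace path traced above.

# Save the JSON printed above as bdev.json, then:
cat > randread.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
thread=1
rw=randread
; read,write,trim block sizes, matching the (R)/(W)/(T) banner above
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# numjobs=2 across two job sections accounts for the "Starting 4 threads" line below.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio randread.fio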
00:31:01.076 fio-3.35 00:31:01.076 Starting 4 threads 00:31:01.076 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.382 00:31:06.382 filename0: (groupid=0, jobs=1): err= 0: pid=2310098: Fri Jul 12 11:09:22 2024 00:31:06.382 read: IOPS=2947, BW=23.0MiB/s (24.1MB/s)(115MiB/5002msec) 00:31:06.382 slat (nsec): min=5390, max=50726, avg=6800.38, stdev=2833.07 00:31:06.382 clat (usec): min=1069, max=5008, avg=2695.31, stdev=483.87 00:31:06.382 lat (usec): min=1088, max=5013, avg=2702.11, stdev=483.78 00:31:06.382 clat percentiles (usec): 00:31:06.382 | 1.00th=[ 1745], 5.00th=[ 2008], 10.00th=[ 2114], 20.00th=[ 2311], 00:31:06.382 | 30.00th=[ 2409], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2737], 00:31:06.382 | 70.00th=[ 2900], 80.00th=[ 3064], 90.00th=[ 3359], 95.00th=[ 3589], 00:31:06.382 | 99.00th=[ 4015], 99.50th=[ 4228], 99.90th=[ 4490], 99.95th=[ 4555], 00:31:06.382 | 99.99th=[ 4883] 00:31:06.382 bw ( KiB/s): min=23248, max=24304, per=25.55%, avg=23614.22, stdev=327.07, samples=9 00:31:06.382 iops : min= 2906, max= 3038, avg=2951.78, stdev=40.88, samples=9 00:31:06.382 lat (msec) : 2=4.86%, 4=93.97%, 10=1.17% 00:31:06.382 cpu : usr=96.32%, sys=3.42%, ctx=15, majf=0, minf=118 00:31:06.382 IO depths : 1=0.5%, 2=2.4%, 4=68.3%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.382 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.382 issued rwts: total=14742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.382 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:06.382 filename0: (groupid=0, jobs=1): err= 0: pid=2310099: Fri Jul 12 11:09:22 2024 00:31:06.382 read: IOPS=2849, BW=22.3MiB/s (23.3MB/s)(111MiB/5001msec) 00:31:06.382 slat (nsec): min=5385, max=77475, avg=7142.34, stdev=2965.56 00:31:06.382 clat (usec): min=1053, max=5024, avg=2787.60, stdev=512.59 00:31:06.382 lat (usec): min=1061, max=5029, avg=2794.74, stdev=512.66 00:31:06.382 clat percentiles (usec): 00:31:06.382 | 1.00th=[ 1811], 5.00th=[ 2040], 10.00th=[ 2180], 20.00th=[ 2343], 00:31:06.382 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2835], 00:31:06.382 | 70.00th=[ 2999], 80.00th=[ 3195], 90.00th=[ 3490], 95.00th=[ 3752], 00:31:06.382 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 4686], 99.95th=[ 4752], 00:31:06.382 | 99.99th=[ 5014] 00:31:06.382 bw ( KiB/s): min=22608, max=23136, per=24.74%, avg=22874.67, stdev=184.69, samples=9 00:31:06.382 iops : min= 2826, max= 2892, avg=2859.33, stdev=23.09, samples=9 00:31:06.382 lat (msec) : 2=3.80%, 4=94.06%, 10=2.14% 00:31:06.382 cpu : usr=97.10%, sys=2.62%, ctx=12, majf=0, minf=88 00:31:06.382 IO depths : 1=0.3%, 2=1.9%, 4=69.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.382 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.382 issued rwts: total=14251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.382 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:06.382 filename1: (groupid=0, jobs=1): err= 0: pid=2310100: Fri Jul 12 11:09:22 2024 00:31:06.382 read: IOPS=2944, BW=23.0MiB/s (24.1MB/s)(115MiB/5002msec) 00:31:06.383 slat (nsec): min=5385, max=53089, avg=8975.38, stdev=3204.21 00:31:06.383 clat (usec): min=1091, max=44260, avg=2692.37, stdev=1084.11 00:31:06.383 lat (usec): min=1099, max=44287, avg=2701.34, stdev=1084.21 00:31:06.383 clat percentiles (usec): 00:31:06.383 | 1.00th=[ 1713], 5.00th=[ 1958], 
10.00th=[ 2114], 20.00th=[ 2278], 00:31:06.383 | 30.00th=[ 2409], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2737], 00:31:06.383 | 70.00th=[ 2868], 80.00th=[ 3064], 90.00th=[ 3326], 95.00th=[ 3556], 00:31:06.383 | 99.00th=[ 4047], 99.50th=[ 4228], 99.90th=[ 4883], 99.95th=[44303], 00:31:06.383 | 99.99th=[44303] 00:31:06.383 bw ( KiB/s): min=22064, max=24144, per=25.41%, avg=23491.56, stdev=630.16, samples=9 00:31:06.383 iops : min= 2758, max= 3018, avg=2936.44, stdev=78.77, samples=9 00:31:06.383 lat (msec) : 2=5.95%, 4=92.80%, 10=1.20%, 50=0.05% 00:31:06.383 cpu : usr=96.38%, sys=3.28%, ctx=16, majf=0, minf=58 00:31:06.383 IO depths : 1=0.4%, 2=2.1%, 4=68.0%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.383 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.383 issued rwts: total=14729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.383 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:06.383 filename1: (groupid=0, jobs=1): err= 0: pid=2310101: Fri Jul 12 11:09:22 2024 00:31:06.383 read: IOPS=2814, BW=22.0MiB/s (23.1MB/s)(110MiB/5001msec) 00:31:06.383 slat (nsec): min=7858, max=72355, avg=9159.61, stdev=3340.29 00:31:06.383 clat (usec): min=1148, max=5476, avg=2816.34, stdev=517.24 00:31:06.383 lat (usec): min=1156, max=5484, avg=2825.50, stdev=517.30 00:31:06.383 clat percentiles (usec): 00:31:06.383 | 1.00th=[ 1795], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 2376], 00:31:06.383 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2900], 00:31:06.383 | 70.00th=[ 3032], 80.00th=[ 3228], 90.00th=[ 3523], 95.00th=[ 3752], 00:31:06.383 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 4752], 99.95th=[ 4817], 00:31:06.383 | 99.99th=[ 5473] 00:31:06.383 bw ( KiB/s): min=22224, max=22832, per=24.38%, avg=22533.00, stdev=232.42, samples=9 00:31:06.383 iops : min= 2778, max= 2854, avg=2816.56, stdev=28.97, samples=9 00:31:06.383 lat (msec) : 2=3.57%, 4=94.09%, 10=2.34% 00:31:06.383 cpu : usr=96.78%, sys=2.92%, ctx=9, majf=0, minf=80 00:31:06.383 IO depths : 1=0.8%, 2=2.8%, 4=69.2%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.383 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.383 issued rwts: total=14076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.383 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:06.383 00:31:06.383 Run status group 0 (all jobs): 00:31:06.383 READ: bw=90.3MiB/s (94.7MB/s), 22.0MiB/s-23.0MiB/s (23.1MB/s-24.1MB/s), io=452MiB (473MB), run=5001-5002msec 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.383 00:31:06.383 real 0m24.438s 00:31:06.383 user 5m16.809s 00:31:06.383 sys 0m4.560s 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:06.383 11:09:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:06.383 ************************************ 00:31:06.383 END TEST fio_dif_rand_params 00:31:06.383 ************************************ 00:31:06.383 11:09:22 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:06.383 11:09:22 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:06.383 11:09:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:06.383 11:09:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.383 11:09:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:06.383 ************************************ 00:31:06.383 START TEST fio_dif_digest 00:31:06.383 ************************************ 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:06.383 bdev_null0 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:06.383 11:09:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.383 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:06.383 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.383 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:06.383 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.383 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.383 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.383 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:06.384 [2024-07-12 11:09:23.020415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:06.384 { 00:31:06.384 "params": { 00:31:06.384 "name": "Nvme$subsystem", 00:31:06.384 "trtype": "$TEST_TRANSPORT", 00:31:06.384 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:06.384 "adrfam": "ipv4", 00:31:06.384 "trsvcid": "$NVMF_PORT", 00:31:06.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.384 "hdgst": ${hdgst:-false}, 00:31:06.384 "ddgst": ${ddgst:-false} 00:31:06.384 }, 00:31:06.384 "method": "bdev_nvme_attach_controller" 00:31:06.384 } 00:31:06.384 EOF 00:31:06.384 )") 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:06.384 "params": { 00:31:06.384 "name": "Nvme0", 00:31:06.384 "trtype": "tcp", 00:31:06.384 "traddr": "10.0.0.2", 00:31:06.384 "adrfam": "ipv4", 00:31:06.384 "trsvcid": "4420", 00:31:06.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.384 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:06.384 "hdgst": true, 00:31:06.384 "ddgst": true 00:31:06.384 }, 00:31:06.384 "method": "bdev_nvme_attach_controller" 00:31:06.384 }' 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:06.384 11:09:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.647 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:06.647 ... 
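For reference, the config streamed to fio over /dev/fd/62 is an ordinary SPDK JSON config; reproduced standalone it would look like the sketch below. The attach parameters are copied from the printf above; the jobfile flags and the Nvme0n1 filename are reconstructed from the fio banner and should be treated as assumptions.

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true, "ddgst": true
          }
        }]
      }]
    }
    EOF
    # fio loads the SPDK engine via LD_PRELOAD, exactly as the trace does;
    # the spdk_bdev ioengine requires --thread=1.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --name=dif_digest --thread=1 --ioengine=spdk_bdev \
      --spdk_json_conf=/tmp/nvme0.json --filename=Nvme0n1 \
      --rw=randread --bs=128k --iodepth=3 --numjobs=3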
00:31:06.647 fio-3.35 00:31:06.647 Starting 3 threads 00:31:06.647 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.875 00:31:18.875 filename0: (groupid=0, jobs=1): err= 0: pid=2311350: Fri Jul 12 11:09:34 2024 00:31:18.875 read: IOPS=252, BW=31.6MiB/s (33.2MB/s)(317MiB/10010msec) 00:31:18.875 slat (nsec): min=5629, max=34289, avg=7266.82, stdev=1651.02 00:31:18.875 clat (usec): min=6292, max=93319, avg=11850.30, stdev=8086.30 00:31:18.875 lat (usec): min=6300, max=93326, avg=11857.56, stdev=8086.29 00:31:18.875 clat percentiles (usec): 00:31:18.875 | 1.00th=[ 7242], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9503], 00:31:18.875 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:31:18.875 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11994], 95.00th=[12649], 00:31:18.875 | 99.00th=[52167], 99.50th=[53216], 99.90th=[92799], 99.95th=[92799], 00:31:18.875 | 99.99th=[92799] 00:31:18.875 bw ( KiB/s): min=24576, max=36352, per=31.43%, avg=32371.20, stdev=2909.94, samples=20 00:31:18.875 iops : min= 192, max= 284, avg=252.90, stdev=22.73, samples=20 00:31:18.875 lat (msec) : 10=34.16%, 20=62.28%, 50=0.32%, 100=3.24% 00:31:18.875 cpu : usr=95.22%, sys=4.50%, ctx=17, majf=0, minf=142 00:31:18.875 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.875 issued rwts: total=2532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.875 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:18.875 filename0: (groupid=0, jobs=1): err= 0: pid=2311351: Fri Jul 12 11:09:34 2024 00:31:18.875 read: IOPS=282, BW=35.4MiB/s (37.1MB/s)(355MiB/10047msec) 00:31:18.875 slat (nsec): min=5795, max=32912, avg=7404.14, stdev=1689.37 00:31:18.875 clat (usec): min=5589, max=92887, avg=10577.05, stdev=4787.25 00:31:18.875 lat (usec): min=5598, max=92895, avg=10584.45, stdev=4787.12 00:31:18.875 clat percentiles (usec): 00:31:18.875 | 1.00th=[ 6915], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[ 8717], 00:31:18.875 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10683], 00:31:18.875 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11994], 95.00th=[12518], 00:31:18.875 | 99.00th=[49546], 99.50th=[52167], 99.90th=[53740], 99.95th=[54264], 00:31:18.875 | 99.99th=[92799] 00:31:18.875 bw ( KiB/s): min=30976, max=45312, per=35.30%, avg=36364.80, stdev=3712.09, samples=20 00:31:18.875 iops : min= 242, max= 354, avg=284.10, stdev=29.00, samples=20 00:31:18.875 lat (msec) : 10=46.61%, 20=52.30%, 50=0.11%, 100=0.98% 00:31:18.875 cpu : usr=94.81%, sys=4.91%, ctx=23, majf=0, minf=106 00:31:18.875 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.875 issued rwts: total=2843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.875 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:18.875 filename0: (groupid=0, jobs=1): err= 0: pid=2311352: Fri Jul 12 11:09:34 2024 00:31:18.875 read: IOPS=269, BW=33.7MiB/s (35.4MB/s)(339MiB/10044msec) 00:31:18.875 slat (nsec): min=5817, max=35160, avg=7341.00, stdev=1458.91 00:31:18.875 clat (usec): min=5990, max=91715, avg=11078.42, stdev=5010.30 00:31:18.875 lat (usec): min=5996, max=91721, avg=11085.76, stdev=5010.40 00:31:18.875 clat percentiles (usec): 
00:31:18.875 | 1.00th=[ 7177], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 9372], 00:31:18.875 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:31:18.875 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12256], 95.00th=[12911], 00:31:18.875 | 99.00th=[50594], 99.50th=[51643], 99.90th=[54264], 99.95th=[90702], 00:31:18.875 | 99.99th=[91751] 00:31:18.875 bw ( KiB/s): min=26880, max=38400, per=33.66%, avg=34675.20, stdev=3000.98, samples=20 00:31:18.875 iops : min= 210, max= 300, avg=270.90, stdev=23.45, samples=20 00:31:18.875 lat (msec) : 10=30.96%, 20=67.86%, 50=0.11%, 100=1.07% 00:31:18.875 cpu : usr=94.60%, sys=5.12%, ctx=25, majf=0, minf=148 00:31:18.875 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.875 issued rwts: total=2710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.876 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:18.876 00:31:18.876 Run status group 0 (all jobs): 00:31:18.876 READ: bw=101MiB/s (105MB/s), 31.6MiB/s-35.4MiB/s (33.2MB/s-37.1MB/s), io=1011MiB (1060MB), run=10010-10047msec 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.876 00:31:18.876 real 0m11.192s 00:31:18.876 user 0m44.680s 00:31:18.876 sys 0m1.760s 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:18.876 11:09:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.876 ************************************ 00:31:18.876 END TEST fio_dif_digest 00:31:18.876 ************************************ 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:18.876 11:09:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:18.876 11:09:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:18.876 11:09:34 nvmf_dif -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:18.876 rmmod nvme_tcp 00:31:18.876 rmmod nvme_fabrics 00:31:18.876 rmmod nvme_keyring 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2300559 ']' 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2300559 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2300559 ']' 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2300559 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2300559 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2300559' 00:31:18.876 killing process with pid 2300559 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2300559 00:31:18.876 11:09:34 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2300559 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:18.876 11:09:34 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:20.792 Waiting for block devices as requested 00:31:20.792 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:21.052 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:21.052 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:21.052 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:21.313 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:21.313 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:21.313 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:21.313 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:21.574 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:21.574 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:21.834 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:21.834 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:21.834 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:22.095 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:22.095 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:22.095 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:22.355 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:22.616 11:09:39 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:22.616 11:09:39 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:22.616 11:09:39 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:22.616 11:09:39 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:22.616 11:09:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.616 11:09:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:22.616 11:09:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.529 11:09:41 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:24.529 00:31:24.529 real 1m17.835s 00:31:24.529 user 7m57.206s 00:31:24.529 sys 0m20.875s 00:31:24.529 11:09:41 nvmf_dif -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:31:24.529 11:09:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:24.529 ************************************ 00:31:24.529 END TEST nvmf_dif 00:31:24.529 ************************************ 00:31:24.790 11:09:41 -- common/autotest_common.sh@1142 -- # return 0 00:31:24.790 11:09:41 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:24.790 11:09:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:24.790 11:09:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:24.790 11:09:41 -- common/autotest_common.sh@10 -- # set +x 00:31:24.790 ************************************ 00:31:24.790 START TEST nvmf_abort_qd_sizes 00:31:24.790 ************************************ 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:24.790 * Looking for test storage... 00:31:24.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.790 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.791 11:09:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:24.791 11:09:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:32.946 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:32.947 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:32.947 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:32.947 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:32.947 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
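The NIC discovery just traced reduces to a sysfs walk: each NVMf-capable PCI function (here the two e810 ports, 8086:0x159b) is mapped to its kernel netdev through /sys/bus/pci/devices/<bdf>/net. A condensed sketch of that loop:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
      # an interface bound to the PCI function appears as a child of its node
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
    done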
00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:32.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.777 ms 00:31:32.947 00:31:32.947 --- 10.0.0.2 ping statistics --- 00:31:32.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.947 rtt min/avg/max/mdev = 0.777/0.777/0.777/0.000 ms 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:31:32.947 00:31:32.947 --- 10.0.0.1 ping statistics --- 00:31:32.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.947 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:32.947 11:09:48 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:35.492 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:35.492 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:35.753 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.753 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:35.753 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:35.753 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.753 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:35.753 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:36.047 11:09:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2320721 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2320721 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2320721 ']' 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:36.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:36.048 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:36.048 [2024-07-12 11:09:52.822691] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:36.048 [2024-07-12 11:09:52.822736] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.048 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.048 [2024-07-12 11:09:52.881386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.048 [2024-07-12 11:09:52.951194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.048 [2024-07-12 11:09:52.951247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.048 [2024-07-12 11:09:52.951257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.048 [2024-07-12 11:09:52.951264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.048 [2024-07-12 11:09:52.951270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.048 [2024-07-12 11:09:52.951438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.048 [2024-07-12 11:09:52.951599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.048 [2024-07-12 11:09:52.951733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.048 [2024-07-12 11:09:52.951734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:36.308 11:09:53 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.308 11:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:36.308 ************************************ 00:31:36.308 START TEST spdk_target_abort 00:31:36.308 ************************************ 00:31:36.308 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:36.308 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:36.308 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:36.308 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.308 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.569 spdk_targetn1 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.569 [2024-07-12 11:09:53.486855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.569 [2024-07-12 11:09:53.527265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:36.569 11:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:36.829 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:36.829 [2024-07-12 11:09:53.653923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:272 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:36.829 [2024-07-12 11:09:53.653973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0023 p:1 m:0 dnr:0 00:31:36.829 [2024-07-12 11:09:53.660681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:408 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:36.829 [2024-07-12 11:09:53.660711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:31:36.829 [2024-07-12 11:09:53.676776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:824 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:36.829 [2024-07-12 11:09:53.676806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0068 p:1 m:0 dnr:0 00:31:36.829 [2024-07-12 11:09:53.724625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2072 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:36.829 [2024-07-12 11:09:53.724658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:36.829 [2024-07-12 11:09:53.732652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2296 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:36.829 [2024-07-12 11:09:53.732679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:36.829 [2024-07-12 11:09:53.749514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2800 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:36.829 [2024-07-12 11:09:53.749543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:36.830 [2024-07-12 11:09:53.756264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2992 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:36.830 [2024-07-12 11:09:53.756298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:40.127 Initializing NVMe Controllers 00:31:40.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:40.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:40.127 Initialization complete. Launching workers. 
00:31:40.127 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9696, failed: 7 00:31:40.127 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3060, failed to submit 6643 00:31:40.127 success 731, unsuccess 2329, failed 0 00:31:40.127 11:09:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:40.127 11:09:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:40.127 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.127 [2024-07-12 11:09:56.850208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:304 len:8 PRP1 0x200007c58000 PRP2 0x0 00:31:40.127 [2024-07-12 11:09:56.850254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:31:40.127 [2024-07-12 11:09:56.858245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:480 len:8 PRP1 0x200007c46000 PRP2 0x0 00:31:40.127 [2024-07-12 11:09:56.858273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:31:40.127 [2024-07-12 11:09:56.929298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:2216 len:8 PRP1 0x200007c50000 PRP2 0x0 00:31:40.127 [2024-07-12 11:09:56.929325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:40.698 [2024-07-12 11:09:57.435841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:14176 len:8 PRP1 0x200007c58000 PRP2 0x0 00:31:40.698 [2024-07-12 11:09:57.435871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00f6 p:1 m:0 dnr:0 00:31:42.683 [2024-07-12 11:09:59.513862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:62304 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:31:42.683 [2024-07-12 11:09:59.513887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0078 p:1 m:0 dnr:0 00:31:43.254 Initializing NVMe Controllers 00:31:43.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:43.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:43.254 Initialization complete. Launching workers. 
00:31:43.254 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8687, failed: 5 00:31:43.254 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1222, failed to submit 7470 00:31:43.254 success 344, unsuccess 878, failed 0 00:31:43.254 11:09:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.254 11:09:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.254 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.515 [2024-07-12 11:10:00.465810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:160 nsid:1 lba:23808 len:8 PRP1 0x20000791a000 PRP2 0x0 00:31:43.515 [2024-07-12 11:10:00.465867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:160 cdw0:0 sqhd:0088 p:0 m:0 dnr:0 00:31:44.086 [2024-07-12 11:10:00.933718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:144 nsid:1 lba:78528 len:8 PRP1 0x20000792a000 PRP2 0x0 00:31:44.086 [2024-07-12 11:10:00.933746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:144 cdw0:0 sqhd:004a p:1 m:0 dnr:0 00:31:46.629 Initializing NVMe Controllers 00:31:46.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:46.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:46.629 Initialization complete. Launching workers. 00:31:46.629 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44129, failed: 2 00:31:46.629 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2638, failed to submit 41493 00:31:46.629 success 576, unsuccess 2062, failed 0 00:31:46.629 11:10:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:46.629 11:10:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.629 11:10:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:46.629 11:10:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.629 11:10:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:46.629 11:10:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.629 11:10:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2320721 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2320721 ']' 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2320721 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:48.544 
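Before the teardown finishes, note the shape of the sweep that produced the three result blocks above: target/abort_qd_sizes.sh loops the abort example over queue depths 4, 24 and 64, driving 4 KiB mixed I/O (-w rw -M 50 -o 4096) and racing abort commands against it. Condensed, with all values taken from the trace:

    ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
    TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do     # qds=(4 24 64)
      "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TGT"
    done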
11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2320721 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2320721' 00:31:48.544 killing process with pid 2320721 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2320721 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2320721 00:31:48.544 00:31:48.544 real 0m12.157s 00:31:48.544 user 0m47.182s 00:31:48.544 sys 0m2.062s 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.544 ************************************ 00:31:48.544 END TEST spdk_target_abort 00:31:48.544 ************************************ 00:31:48.544 11:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:48.544 11:10:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:48.544 11:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:48.544 11:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.544 11:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:48.544 ************************************ 00:31:48.544 START TEST kernel_target_abort 00:31:48.544 ************************************ 00:31:48.544 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:48.545 11:10:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:48.545 11:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:51.850 Waiting for block devices as requested 00:31:51.850 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:52.110 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:52.110 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:52.110 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:52.372 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:52.372 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:52.372 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:52.634 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:52.634 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:52.894 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:52.894 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:52.894 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:53.156 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:53.156 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:53.156 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:53.156 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:53.417 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:53.678 No valid GPT data, bailing 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
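The configfs sequence traced next is shown by xtrace without its redirection targets; assuming the standard Linux nvmet configfs layout (which is what configure_kernel_target drives), the kernel-target setup amounts to the sketch below. The 'echo SPDK-nqn...' identity write is omitted.

    NQN=nqn.2016-06.io.spdk:testnqn
    SUB=/sys/kernel/config/nvmet/subsystems/$NQN
    PORT=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet            # nvmet_tcp must also be available for a tcp port
    mkdir -p "$SUB/namespaces/1" "$PORT"
    echo 1            > "$SUB/attr_allow_any_host"
    echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"
    echo 1            > "$SUB/namespaces/1/enable"
    echo 10.0.0.1     > "$PORT/addr_traddr"
    echo tcp          > "$PORT/addr_trtype"
    echo 4420         > "$PORT/addr_trsvcid"
    echo ipv4         > "$PORT/addr_adrfam"
    ln -s "$SUB" "$PORT/subsystems/"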
00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:53.678 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:53.939 00:31:53.939 Discovery Log Number of Records 2, Generation counter 2 00:31:53.939 =====Discovery Log Entry 0====== 00:31:53.939 trtype: tcp 00:31:53.939 adrfam: ipv4 00:31:53.939 subtype: current discovery subsystem 00:31:53.939 treq: not specified, sq flow control disable supported 00:31:53.939 portid: 1 00:31:53.939 trsvcid: 4420 00:31:53.939 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:53.939 traddr: 10.0.0.1 00:31:53.939 eflags: none 00:31:53.939 sectype: none 00:31:53.939 =====Discovery Log Entry 1====== 00:31:53.939 trtype: tcp 00:31:53.939 adrfam: ipv4 00:31:53.939 subtype: nvme subsystem 00:31:53.939 treq: not specified, sq flow control disable supported 00:31:53.939 portid: 1 00:31:53.939 trsvcid: 4420 00:31:53.939 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:53.939 traddr: 10.0.0.1 00:31:53.939 eflags: none 00:31:53.939 sectype: none 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:53.939 
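The mkdir/echo/ln sequence traced above is the standard Linux nvmet configfs wiring: create the subsystem, back namespace 1 with /dev/nvme0n1, open a TCP port on 10.0.0.1:4420, and link the subsystem into the port. A minimal standalone sketch follows; the attribute file names are the usual nvmet ones and are assumptions here, since the trace hides the redirection target of each echo.

    # Sketch of configure_kernel_target (attribute names assumed; values from the trace)
    modprobe nvmet nvmet_tcp   # the trace shows the nvmet probe; the tcp transport is implied
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string (assumed target file)
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # exposes the subsystem on the port

Once the symlink is in place, the nvme discover against 10.0.0.1:4420 shown in the trace returns two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.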
11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:53.939 11:10:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:53.939 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.246 Initializing NVMe Controllers 00:31:57.246 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:57.246 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:57.246 Initialization complete. Launching workers. 
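The rabort helper traced above folds trtype/adrfam/traddr/trsvcid/subnqn into the single -r transport-ID string that SPDK's abort example expects, then sweeps the queue depths 4, 24 and 64. A condensed sketch of the same loop, with the example binary path as in this workspace:

    # Sketch of rabort(): build the transport ID string, then sweep queue depths
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
            -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

The -q value is the depth of the I/O queue whose in-flight commands the tool then tries to abort; -w rw -M 50 -o 4096 issues a 50/50 read/write mix of 4 KiB I/Os.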
00:31:57.246 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53752, failed: 0 00:31:57.246 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 53752, failed to submit 0 00:31:57.246 success 0, unsuccess 53752, failed 0 00:31:57.246 11:10:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:57.246 11:10:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:57.246 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.550 Initializing NVMe Controllers 00:32:00.550 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:00.550 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:00.550 Initialization complete. Launching workers. 00:32:00.550 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94042, failed: 0 00:32:00.550 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23702, failed to submit 70340 00:32:00.550 success 0, unsuccess 23702, failed 0 00:32:00.550 11:10:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:00.550 11:10:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:00.551 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.095 Initializing NVMe Controllers 00:32:03.095 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:03.095 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:03.095 Initialization complete. Launching workers. 
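The NS/CTRLR counters above carry the pass criteria: at queue depth 4 every data I/O could be paired with an abort (53752 submitted, 0 failed to submit), while at depth 24 only 23702 of 94042 aborts went out and the remaining 70340 are counted as "failed to submit" rather than as errors, typically because the queue carrying the abort commands saturates. A throwaway one-liner for pulling those numbers out of a saved log, assuming one log entry per line as the tool originally prints them:

    # Sketch: extract aborts submitted / failed-to-submit per run from a saved log
    awk '/abort submitted/ { sub(/,/, "", $(NF-4)); print "submitted:", $(NF-4), "failed_to_submit:", $NF }' abort_test.log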
00:32:03.095 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 127770, failed: 0 00:32:03.095 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31962, failed to submit 95808 00:32:03.095 success 0, unsuccess 31962, failed 0 00:32:03.096 11:10:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:03.096 11:10:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:03.096 11:10:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:03.096 11:10:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:03.096 11:10:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:03.096 11:10:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:03.096 11:10:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:03.096 11:10:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:03.096 11:10:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:03.358 11:10:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:06.659 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:06.659 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:06.920 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:06.920 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:06.920 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:06.920 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:08.891 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:08.891 00:32:08.891 real 0m20.324s 00:32:08.891 user 0m8.643s 00:32:08.891 sys 0m6.384s 00:32:08.891 11:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:08.891 11:10:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:08.891 ************************************ 00:32:08.891 END TEST kernel_target_abort 00:32:08.891 ************************************ 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- 
nvmf/common.sh@117 -- # sync 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:08.891 rmmod nvme_tcp 00:32:08.891 rmmod nvme_fabrics 00:32:08.891 rmmod nvme_keyring 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2320721 ']' 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2320721 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2320721 ']' 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2320721 00:32:08.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2320721) - No such process 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2320721 is not found' 00:32:08.891 Process with pid 2320721 is not found 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:08.891 11:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:12.195 Waiting for block devices as requested 00:32:12.457 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:12.457 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:12.457 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:12.719 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:12.719 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:12.719 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:12.719 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:12.981 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:12.981 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:13.243 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:13.243 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:13.505 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:13.505 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:13.505 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:13.505 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:13.766 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:13.766 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:14.027 11:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:14.027 11:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:14.027 11:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:14.027 11:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:14.027 11:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.027 11:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:14.027 11:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.573 11:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:16.573 00:32:16.573 real 0m51.481s 00:32:16.573 user 1m1.065s 00:32:16.573 sys 
0m19.103s 00:32:16.573 11:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:16.573 11:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:16.573 ************************************ 00:32:16.573 END TEST nvmf_abort_qd_sizes 00:32:16.573 ************************************ 00:32:16.573 11:10:33 -- common/autotest_common.sh@1142 -- # return 0 00:32:16.573 11:10:33 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:16.573 11:10:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:16.573 11:10:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:16.573 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:32:16.573 ************************************ 00:32:16.573 START TEST keyring_file 00:32:16.573 ************************************ 00:32:16.574 11:10:33 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:16.574 * Looking for test storage... 00:32:16.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.574 11:10:33 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.574 11:10:33 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.574 11:10:33 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.574 11:10:33 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.574 11:10:33 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.574 11:10:33 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.574 11:10:33 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:16.574 11:10:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:16.574 11:10:33 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kC1fg7rN81 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kC1fg7rN81 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kC1fg7rN81 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.kC1fg7rN81 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3zb03yH98e 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:16.574 11:10:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3zb03yH98e 00:32:16.574 11:10:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3zb03yH98e 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.3zb03yH98e 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@30 -- # tgtpid=2330862 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2330862 00:32:16.574 11:10:33 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:16.574 11:10:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2330862 ']' 00:32:16.574 11:10:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.574 11:10:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:16.574 11:10:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
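The prep_key traces above show the same pattern for both keys: write a 16-byte hex PSK into a mktemp file in the NVMe TLS interchange form and lock the permissions down to 0600 before handing the path to the keyring. format_interchange_psk is the SPDK helper seen in the trace; the embedded python step it drives (which encodes the key material into the NVMeTLSkey-1 string) is treated as opaque in this condensed sketch.

    # Sketch of prep_key, reusing the helper names from the trace
    key=00112233445566778899aabbccddeeff      # key0; key1 uses 112233445566778899aabbccddeeff00
    path=$(mktemp)                            # e.g. /tmp/tmp.kC1fg7rN81 in this run
    format_interchange_psk "$key" 0 > "$path" # digest 0 = no PSK digest (assumed redirection)
    chmod 0600 "$path"                        # the keyring rejects group/other-accessible files

The 0600 requirement is exercised as a deliberate negative test later in this suite (the chmod 0660 step).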
00:32:16.574 11:10:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:16.574 11:10:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:16.574 [2024-07-12 11:10:33.460486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:16.574 [2024-07-12 11:10:33.460560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330862 ] 00:32:16.575 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.575 [2024-07-12 11:10:33.541522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.835 [2024-07-12 11:10:33.637208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:17.407 11:10:34 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:17.407 [2024-07-12 11:10:34.252064] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.407 null0 00:32:17.407 [2024-07-12 11:10:34.284089] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:17.407 [2024-07-12 11:10:34.284431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:17.407 [2024-07-12 11:10:34.292104] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.407 11:10:34 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:17.407 [2024-07-12 11:10:34.308149] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:17.407 request: 00:32:17.407 { 00:32:17.407 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.407 "secure_channel": false, 00:32:17.407 "listen_address": { 00:32:17.407 "trtype": "tcp", 00:32:17.407 "traddr": "127.0.0.1", 00:32:17.407 "trsvcid": "4420" 00:32:17.407 }, 00:32:17.407 "method": "nvmf_subsystem_add_listener", 00:32:17.407 "req_id": 1 00:32:17.407 } 00:32:17.407 Got JSON-RPC error response 
00:32:17.407 response: 00:32:17.407 { 00:32:17.407 "code": -32602, 00:32:17.407 "message": "Invalid parameters" 00:32:17.407 } 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:17.407 11:10:34 keyring_file -- keyring/file.sh@46 -- # bperfpid=2330952 00:32:17.407 11:10:34 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2330952 /var/tmp/bperf.sock 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2330952 ']' 00:32:17.407 11:10:34 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:17.407 11:10:34 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:17.408 11:10:34 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:17.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:17.408 11:10:34 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:17.408 11:10:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:17.408 [2024-07-12 11:10:34.367237] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:32:17.408 [2024-07-12 11:10:34.367302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330952 ] 00:32:17.668 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.668 [2024-07-12 11:10:34.447375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.668 [2024-07-12 11:10:34.541825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.253 11:10:35 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.253 11:10:35 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:18.253 11:10:35 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81 00:32:18.253 11:10:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81 00:32:18.520 11:10:35 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3zb03yH98e 00:32:18.520 11:10:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3zb03yH98e 00:32:18.520 11:10:35 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:18.520 11:10:35 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:18.520 11:10:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.520 11:10:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.520 11:10:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:18.781 11:10:35 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.kC1fg7rN81 == \/\t\m\p\/\t\m\p\.\k\C\1\f\g\7\r\N\8\1 ]] 00:32:18.781 11:10:35 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:18.781 11:10:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:18.781 11:10:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.781 11:10:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.781 11:10:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:19.041 11:10:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3zb03yH98e == \/\t\m\p\/\t\m\p\.\3\z\b\0\3\y\H\9\8\e ]] 00:32:19.041 11:10:35 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:19.041 11:10:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:19.041 11:10:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.041 11:10:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.041 11:10:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.041 11:10:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.301 11:10:36 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:19.301 11:10:36 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:19.301 11:10:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:19.301 11:10:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.301 
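Everything from here on talks to the bdevperf instance over its private RPC socket: bperf_cmd is a thin wrapper around rpc.py -s /var/tmp/bperf.sock, and get_key/get_refcnt are jq filters over keyring_get_keys output. A condensed sketch of the pattern, with paths as in this workspace:

    # Sketch of the bperf_cmd / get_key / get_refcnt helpers from the trace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81
    "$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0")'            # full key object
    "$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'  # just the refcount

The refcount assertions that follow, (( 1 == 1 )) and later (( 2 == 2 )), check that attaching a controller takes an extra reference on the key it uses and releases it again on detach.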
11:10:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.301 11:10:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.301 11:10:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:19.301 11:10:36 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:19.301 11:10:36 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.301 11:10:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.562 [2024-07-12 11:10:36.394920] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:19.562 nvme0n1 00:32:19.562 11:10:36 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:19.562 11:10:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:19.562 11:10:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.562 11:10:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.562 11:10:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.562 11:10:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.823 11:10:36 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:19.823 11:10:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:19.823 11:10:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:19.823 11:10:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.823 11:10:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.823 11:10:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.823 11:10:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:20.083 11:10:36 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:20.083 11:10:36 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:20.083 Running I/O for 1 seconds... 
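With both keys registered, the test attaches a TLS-protected controller by name reference (--psk key0 points at the keyring entry, not at the file) and drives I/O through bdevperf's perform_tests RPC; the target logs that TLS support is still considered experimental. The two steps, condensed from the trace, with $rpc and $sock as in the sketch above:

    # Sketch: attach over TLS using the registered key0, then run the workload
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests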
00:32:21.023
00:32:21.023 Latency(us)
00:32:21.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:21.023 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:32:21.023 nvme0n1 : 1.01 9692.54 37.86 0.00 0.00 13142.25 3932.16 18786.99
00:32:21.023 ===================================================================================================================
00:32:21.023 Total : 9692.54 37.86 0.00 0.00 13142.25 3932.16 18786.99
00:32:21.023 0
00:32:21.023 11:10:37 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:32:21.023 11:10:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:32:21.284 11:10:38 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:32:21.284 11:10:38 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:32:21.284 11:10:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:21.284 11:10:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:21.284 11:10:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:32:21.284 11:10:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:21.545 11:10:38 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:32:21.545 11:10:38 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:32:21.545 11:10:38 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:32:21.545 11:10:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:32:21.545 11:10:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:32:21.545 11:10:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:32:21.545 11:10:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:21.545 11:10:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:32:21.545 11:10:38 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:32:21.545 11:10:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0
00:32:21.545 11:10:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:32:21.545 11:10:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:32:21.545 11:10:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:32:21.545 11:10:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:32:21.545 11:10:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:32:21.545 11:10:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:32:21.545 11:10:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q
nqn.2016-06.io.spdk:host0 --psk key1 00:32:21.805 [2024-07-12 11:10:38.612604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:21.805 [2024-07-12 11:10:38.613162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd99d0 (107): Transport endpoint is not connected 00:32:21.805 [2024-07-12 11:10:38.614157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd99d0 (9): Bad file descriptor 00:32:21.805 [2024-07-12 11:10:38.615159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:21.805 [2024-07-12 11:10:38.615166] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:21.805 [2024-07-12 11:10:38.615172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:21.805 request: 00:32:21.805 { 00:32:21.805 "name": "nvme0", 00:32:21.805 "trtype": "tcp", 00:32:21.805 "traddr": "127.0.0.1", 00:32:21.805 "adrfam": "ipv4", 00:32:21.805 "trsvcid": "4420", 00:32:21.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:21.805 "prchk_reftag": false, 00:32:21.805 "prchk_guard": false, 00:32:21.805 "hdgst": false, 00:32:21.805 "ddgst": false, 00:32:21.805 "psk": "key1", 00:32:21.805 "method": "bdev_nvme_attach_controller", 00:32:21.805 "req_id": 1 00:32:21.805 } 00:32:21.805 Got JSON-RPC error response 00:32:21.805 response: 00:32:21.805 { 00:32:21.805 "code": -5, 00:32:21.805 "message": "Input/output error" 00:32:21.805 } 00:32:21.805 11:10:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:21.805 11:10:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:21.805 11:10:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:21.805 11:10:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:21.805 11:10:38 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:21.805 11:10:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:21.805 11:10:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.805 11:10:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.805 11:10:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.805 11:10:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.065 11:10:38 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:22.065 11:10:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:22.065 11:10:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:22.065 11:10:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.065 11:10:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.065 11:10:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.065 11:10:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:22.065 11:10:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:22.065 11:10:38 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:22.065 11:10:38 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:22.326 11:10:39 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:22.326 11:10:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:22.326 11:10:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:22.326 11:10:39 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:22.326 11:10:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.586 11:10:39 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:22.586 11:10:39 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.kC1fg7rN81 00:32:22.586 11:10:39 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81 00:32:22.586 11:10:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:22.586 11:10:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81 00:32:22.586 11:10:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:22.586 11:10:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:22.586 11:10:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:22.586 11:10:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:22.586 11:10:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81 00:32:22.586 11:10:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81 00:32:22.847 [2024-07-12 11:10:39.597970] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kC1fg7rN81': 0100660 00:32:22.848 [2024-07-12 11:10:39.597988] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:22.848 request: 00:32:22.848 { 00:32:22.848 "name": "key0", 00:32:22.848 "path": "/tmp/tmp.kC1fg7rN81", 00:32:22.848 "method": "keyring_file_add_key", 00:32:22.848 "req_id": 1 00:32:22.848 } 00:32:22.848 Got JSON-RPC error response 00:32:22.848 response: 00:32:22.848 { 00:32:22.848 "code": -1, 00:32:22.848 "message": "Operation not permitted" 00:32:22.848 } 00:32:22.848 11:10:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:22.848 11:10:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:22.848 11:10:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:22.848 11:10:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:22.848 11:10:39 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.kC1fg7rN81 00:32:22.848 11:10:39 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81 00:32:22.848 11:10:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81 00:32:22.848 11:10:39 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.kC1fg7rN81 00:32:22.848 11:10:39 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 
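The chmod 0660 / keyring_file_add_key exchange above is a deliberate failure: the keyring stats the file, refuses anything group- or other-accessible ("Invalid permissions for key file ... 0100660", JSON-RPC code -1 "Operation not permitted"), and the test then restores 0600, re-adds the key, and removes the backing file for the next negative case. Condensed, with $rpc and $sock as before:

    # Sketch of the permission check: 0660 must be rejected, 0600 accepted
    chmod 0660 /tmp/tmp.kC1fg7rN81
    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81 && echo "BUG: 0660 key accepted"
    chmod 0600 /tmp/tmp.kC1fg7rN81
    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.kC1fg7rN81    # now accepted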
00:32:22.848 11:10:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:22.848 11:10:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.848 11:10:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.848 11:10:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.848 11:10:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:23.108 11:10:39 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:23.108 11:10:39 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.108 11:10:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:23.108 11:10:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.108 11:10:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:23.108 11:10:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.108 11:10:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:23.108 11:10:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.108 11:10:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.108 11:10:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.108 [2024-07-12 11:10:40.083216] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.kC1fg7rN81': No such file or directory 00:32:23.108 [2024-07-12 11:10:40.083241] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:23.108 [2024-07-12 11:10:40.083259] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:23.108 [2024-07-12 11:10:40.083264] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:23.108 [2024-07-12 11:10:40.083269] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:23.108 request: 00:32:23.108 { 00:32:23.108 "name": "nvme0", 00:32:23.108 "trtype": "tcp", 00:32:23.108 "traddr": "127.0.0.1", 00:32:23.108 "adrfam": "ipv4", 00:32:23.108 "trsvcid": "4420", 00:32:23.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.108 "prchk_reftag": false, 00:32:23.108 "prchk_guard": false, 00:32:23.108 "hdgst": false, 00:32:23.108 "ddgst": false, 00:32:23.108 "psk": "key0", 00:32:23.108 "method": "bdev_nvme_attach_controller", 00:32:23.108 "req_id": 1 00:32:23.108 } 00:32:23.108 Got JSON-RPC error response 00:32:23.108 response: 00:32:23.108 { 00:32:23.108 "code": -19, 00:32:23.108 "message": "No such device" 00:32:23.108 } 00:32:23.368 11:10:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:23.368 
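The rm -f in the previous step leaves key0 registered but with no backing file, so the attach above is expected to fail cleanly: the keyring reports "Could not stat key file ... No such file or directory" and the RPC surfaces it as -19 (No such device) instead of crashing or connecting without TLS. A minimal reproduction under the same names:

    # Sketch: attach against a key whose backing file has been deleted -> -19
    rm -f /tmp/tmp.kC1fg7rN81                  # the key0 object still exists in the keyring
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0 || echo "expected: No such device"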
11:10:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:23.368 11:10:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:23.368 11:10:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:23.368 11:10:40 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:23.368 11:10:40 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oWBUwQSvHa 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:23.368 11:10:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:23.368 11:10:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.368 11:10:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:23.368 11:10:40 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:23.368 11:10:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:23.368 11:10:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oWBUwQSvHa 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oWBUwQSvHa 00:32:23.368 11:10:40 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.oWBUwQSvHa 00:32:23.368 11:10:40 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oWBUwQSvHa 00:32:23.368 11:10:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oWBUwQSvHa 00:32:23.629 11:10:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.629 11:10:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.891 nvme0n1 00:32:23.891 11:10:40 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:23.891 11:10:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:23.891 11:10:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.891 11:10:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.891 11:10:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.891 11:10:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:23.891 11:10:40 keyring_file -- 
keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:23.891 11:10:40 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:23.891 11:10:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:24.152 11:10:40 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:24.152 11:10:40 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:24.152 11:10:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.152 11:10:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.152 11:10:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:24.412 11:10:41 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:24.412 11:10:41 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:24.412 11:10:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:24.412 11:10:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.412 11:10:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.412 11:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.412 11:10:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:24.412 11:10:41 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:24.412 11:10:41 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:24.412 11:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:24.673 11:10:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:24.673 11:10:41 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:24.673 11:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.933 11:10:41 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:24.933 11:10:41 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oWBUwQSvHa 00:32:24.933 11:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oWBUwQSvHa 00:32:24.933 11:10:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3zb03yH98e 00:32:24.933 11:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3zb03yH98e 00:32:25.194 11:10:41 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:25.194 11:10:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:25.456 nvme0n1 00:32:25.456 11:10:42 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:25.456 11:10:42 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:25.717 11:10:42 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:25.717 "subsystems": [ 00:32:25.717 { 00:32:25.717 "subsystem": "keyring", 00:32:25.717 "config": [ 00:32:25.717 { 00:32:25.717 "method": "keyring_file_add_key", 00:32:25.717 "params": { 00:32:25.717 "name": "key0", 00:32:25.717 "path": "/tmp/tmp.oWBUwQSvHa" 00:32:25.717 } 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "method": "keyring_file_add_key", 00:32:25.717 "params": { 00:32:25.717 "name": "key1", 00:32:25.717 "path": "/tmp/tmp.3zb03yH98e" 00:32:25.717 } 00:32:25.717 } 00:32:25.717 ] 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "subsystem": "iobuf", 00:32:25.717 "config": [ 00:32:25.717 { 00:32:25.717 "method": "iobuf_set_options", 00:32:25.717 "params": { 00:32:25.717 "small_pool_count": 8192, 00:32:25.717 "large_pool_count": 1024, 00:32:25.717 "small_bufsize": 8192, 00:32:25.717 "large_bufsize": 135168 00:32:25.717 } 00:32:25.717 } 00:32:25.717 ] 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "subsystem": "sock", 00:32:25.717 "config": [ 00:32:25.717 { 00:32:25.717 "method": "sock_set_default_impl", 00:32:25.717 "params": { 00:32:25.717 "impl_name": "posix" 00:32:25.717 } 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "method": "sock_impl_set_options", 00:32:25.717 "params": { 00:32:25.717 "impl_name": "ssl", 00:32:25.717 "recv_buf_size": 4096, 00:32:25.717 "send_buf_size": 4096, 00:32:25.717 "enable_recv_pipe": true, 00:32:25.717 "enable_quickack": false, 00:32:25.717 "enable_placement_id": 0, 00:32:25.717 "enable_zerocopy_send_server": true, 00:32:25.717 "enable_zerocopy_send_client": false, 00:32:25.717 "zerocopy_threshold": 0, 00:32:25.717 "tls_version": 0, 00:32:25.717 "enable_ktls": false 00:32:25.717 } 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "method": "sock_impl_set_options", 00:32:25.717 "params": { 00:32:25.717 "impl_name": "posix", 00:32:25.717 "recv_buf_size": 2097152, 00:32:25.717 "send_buf_size": 2097152, 00:32:25.717 "enable_recv_pipe": true, 00:32:25.717 "enable_quickack": false, 00:32:25.717 "enable_placement_id": 0, 00:32:25.717 "enable_zerocopy_send_server": true, 00:32:25.717 "enable_zerocopy_send_client": false, 00:32:25.717 "zerocopy_threshold": 0, 00:32:25.717 "tls_version": 0, 00:32:25.717 "enable_ktls": false 00:32:25.717 } 00:32:25.717 } 00:32:25.717 ] 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "subsystem": "vmd", 00:32:25.717 "config": [] 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "subsystem": "accel", 00:32:25.717 "config": [ 00:32:25.717 { 00:32:25.717 "method": "accel_set_options", 00:32:25.717 "params": { 00:32:25.717 "small_cache_size": 128, 00:32:25.717 "large_cache_size": 16, 00:32:25.717 "task_count": 2048, 00:32:25.717 "sequence_count": 2048, 00:32:25.717 "buf_count": 2048 00:32:25.717 } 00:32:25.717 } 00:32:25.717 ] 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "subsystem": "bdev", 00:32:25.717 "config": [ 00:32:25.717 { 00:32:25.717 "method": "bdev_set_options", 00:32:25.717 "params": { 00:32:25.717 "bdev_io_pool_size": 65535, 00:32:25.717 "bdev_io_cache_size": 256, 00:32:25.717 "bdev_auto_examine": true, 00:32:25.717 "iobuf_small_cache_size": 128, 00:32:25.717 "iobuf_large_cache_size": 16 00:32:25.717 } 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "method": "bdev_raid_set_options", 00:32:25.717 "params": { 00:32:25.717 "process_window_size_kb": 1024 00:32:25.717 } 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "method": 
"bdev_iscsi_set_options", 00:32:25.717 "params": { 00:32:25.717 "timeout_sec": 30 00:32:25.717 } 00:32:25.717 }, 00:32:25.717 { 00:32:25.717 "method": "bdev_nvme_set_options", 00:32:25.717 "params": { 00:32:25.717 "action_on_timeout": "none", 00:32:25.717 "timeout_us": 0, 00:32:25.717 "timeout_admin_us": 0, 00:32:25.717 "keep_alive_timeout_ms": 10000, 00:32:25.717 "arbitration_burst": 0, 00:32:25.717 "low_priority_weight": 0, 00:32:25.717 "medium_priority_weight": 0, 00:32:25.717 "high_priority_weight": 0, 00:32:25.717 "nvme_adminq_poll_period_us": 10000, 00:32:25.717 "nvme_ioq_poll_period_us": 0, 00:32:25.717 "io_queue_requests": 512, 00:32:25.717 "delay_cmd_submit": true, 00:32:25.717 "transport_retry_count": 4, 00:32:25.717 "bdev_retry_count": 3, 00:32:25.717 "transport_ack_timeout": 0, 00:32:25.717 "ctrlr_loss_timeout_sec": 0, 00:32:25.717 "reconnect_delay_sec": 0, 00:32:25.717 "fast_io_fail_timeout_sec": 0, 00:32:25.717 "disable_auto_failback": false, 00:32:25.717 "generate_uuids": false, 00:32:25.717 "transport_tos": 0, 00:32:25.717 "nvme_error_stat": false, 00:32:25.717 "rdma_srq_size": 0, 00:32:25.717 "io_path_stat": false, 00:32:25.717 "allow_accel_sequence": false, 00:32:25.717 "rdma_max_cq_size": 0, 00:32:25.717 "rdma_cm_event_timeout_ms": 0, 00:32:25.717 "dhchap_digests": [ 00:32:25.717 "sha256", 00:32:25.717 "sha384", 00:32:25.717 "sha512" 00:32:25.717 ], 00:32:25.717 "dhchap_dhgroups": [ 00:32:25.717 "null", 00:32:25.717 "ffdhe2048", 00:32:25.717 "ffdhe3072", 00:32:25.717 "ffdhe4096", 00:32:25.717 "ffdhe6144", 00:32:25.717 "ffdhe8192" 00:32:25.717 ] 00:32:25.717 } 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "method": "bdev_nvme_attach_controller", 00:32:25.718 "params": { 00:32:25.718 "name": "nvme0", 00:32:25.718 "trtype": "TCP", 00:32:25.718 "adrfam": "IPv4", 00:32:25.718 "traddr": "127.0.0.1", 00:32:25.718 "trsvcid": "4420", 00:32:25.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.718 "prchk_reftag": false, 00:32:25.718 "prchk_guard": false, 00:32:25.718 "ctrlr_loss_timeout_sec": 0, 00:32:25.718 "reconnect_delay_sec": 0, 00:32:25.718 "fast_io_fail_timeout_sec": 0, 00:32:25.718 "psk": "key0", 00:32:25.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.718 "hdgst": false, 00:32:25.718 "ddgst": false 00:32:25.718 } 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "method": "bdev_nvme_set_hotplug", 00:32:25.718 "params": { 00:32:25.718 "period_us": 100000, 00:32:25.718 "enable": false 00:32:25.718 } 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "method": "bdev_wait_for_examine" 00:32:25.718 } 00:32:25.718 ] 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "subsystem": "nbd", 00:32:25.718 "config": [] 00:32:25.718 } 00:32:25.718 ] 00:32:25.718 }' 00:32:25.718 11:10:42 keyring_file -- keyring/file.sh@114 -- # killprocess 2330952 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2330952 ']' 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2330952 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2330952 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2330952' 
00:32:25.718 killing process with pid 2330952 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@967 -- # kill 2330952 00:32:25.718 Received shutdown signal, test time was about 1.000000 seconds 00:32:25.718 00:32:25.718 Latency(us) 00:32:25.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.718 =================================================================================================================== 00:32:25.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@972 -- # wait 2330952 00:32:25.718 11:10:42 keyring_file -- keyring/file.sh@117 -- # bperfpid=2332760 00:32:25.718 11:10:42 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2332760 /var/tmp/bperf.sock 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2332760 ']' 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:25.718 11:10:42 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:25.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:25.718 11:10:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:25.718 11:10:42 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:25.718 "subsystems": [ 00:32:25.718 { 00:32:25.718 "subsystem": "keyring", 00:32:25.718 "config": [ 00:32:25.718 { 00:32:25.718 "method": "keyring_file_add_key", 00:32:25.718 "params": { 00:32:25.718 "name": "key0", 00:32:25.718 "path": "/tmp/tmp.oWBUwQSvHa" 00:32:25.718 } 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "method": "keyring_file_add_key", 00:32:25.718 "params": { 00:32:25.718 "name": "key1", 00:32:25.718 "path": "/tmp/tmp.3zb03yH98e" 00:32:25.718 } 00:32:25.718 } 00:32:25.718 ] 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "subsystem": "iobuf", 00:32:25.718 "config": [ 00:32:25.718 { 00:32:25.718 "method": "iobuf_set_options", 00:32:25.718 "params": { 00:32:25.718 "small_pool_count": 8192, 00:32:25.718 "large_pool_count": 1024, 00:32:25.718 "small_bufsize": 8192, 00:32:25.718 "large_bufsize": 135168 00:32:25.718 } 00:32:25.718 } 00:32:25.718 ] 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "subsystem": "sock", 00:32:25.718 "config": [ 00:32:25.718 { 00:32:25.718 "method": "sock_set_default_impl", 00:32:25.718 "params": { 00:32:25.718 "impl_name": "posix" 00:32:25.718 } 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "method": "sock_impl_set_options", 00:32:25.718 "params": { 00:32:25.718 "impl_name": "ssl", 00:32:25.718 "recv_buf_size": 4096, 00:32:25.718 "send_buf_size": 4096, 00:32:25.718 "enable_recv_pipe": true, 00:32:25.718 "enable_quickack": false, 00:32:25.718 "enable_placement_id": 0, 00:32:25.718 "enable_zerocopy_send_server": true, 00:32:25.718 "enable_zerocopy_send_client": false, 00:32:25.718 "zerocopy_threshold": 0, 00:32:25.718 "tls_version": 0, 00:32:25.718 "enable_ktls": false 00:32:25.718 } 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "method": "sock_impl_set_options", 00:32:25.718 "params": { 
00:32:25.718 "impl_name": "posix", 00:32:25.718 "recv_buf_size": 2097152, 00:32:25.718 "send_buf_size": 2097152, 00:32:25.718 "enable_recv_pipe": true, 00:32:25.718 "enable_quickack": false, 00:32:25.718 "enable_placement_id": 0, 00:32:25.718 "enable_zerocopy_send_server": true, 00:32:25.718 "enable_zerocopy_send_client": false, 00:32:25.718 "zerocopy_threshold": 0, 00:32:25.718 "tls_version": 0, 00:32:25.718 "enable_ktls": false 00:32:25.718 } 00:32:25.718 } 00:32:25.718 ] 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "subsystem": "vmd", 00:32:25.718 "config": [] 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "subsystem": "accel", 00:32:25.718 "config": [ 00:32:25.718 { 00:32:25.718 "method": "accel_set_options", 00:32:25.718 "params": { 00:32:25.718 "small_cache_size": 128, 00:32:25.718 "large_cache_size": 16, 00:32:25.718 "task_count": 2048, 00:32:25.718 "sequence_count": 2048, 00:32:25.718 "buf_count": 2048 00:32:25.718 } 00:32:25.718 } 00:32:25.718 ] 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "subsystem": "bdev", 00:32:25.718 "config": [ 00:32:25.718 { 00:32:25.718 "method": "bdev_set_options", 00:32:25.718 "params": { 00:32:25.718 "bdev_io_pool_size": 65535, 00:32:25.718 "bdev_io_cache_size": 256, 00:32:25.718 "bdev_auto_examine": true, 00:32:25.718 "iobuf_small_cache_size": 128, 00:32:25.718 "iobuf_large_cache_size": 16 00:32:25.718 } 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "method": "bdev_raid_set_options", 00:32:25.718 "params": { 00:32:25.718 "process_window_size_kb": 1024 00:32:25.718 } 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "method": "bdev_iscsi_set_options", 00:32:25.718 "params": { 00:32:25.718 "timeout_sec": 30 00:32:25.718 } 00:32:25.718 }, 00:32:25.718 { 00:32:25.718 "method": "bdev_nvme_set_options", 00:32:25.718 "params": { 00:32:25.718 "action_on_timeout": "none", 00:32:25.718 "timeout_us": 0, 00:32:25.718 "timeout_admin_us": 0, 00:32:25.718 "keep_alive_timeout_ms": 10000, 00:32:25.718 "arbitration_burst": 0, 00:32:25.718 "low_priority_weight": 0, 00:32:25.718 "medium_priority_weight": 0, 00:32:25.718 "high_priority_weight": 0, 00:32:25.718 "nvme_adminq_poll_period_us": 10000, 00:32:25.718 "nvme_ioq_poll_period_us": 0, 00:32:25.718 "io_queue_requests": 512, 00:32:25.718 "delay_cmd_submit": true, 00:32:25.718 "transport_retry_count": 4, 00:32:25.718 "bdev_retry_count": 3, 00:32:25.718 "transport_ack_timeout": 0, 00:32:25.718 "ctrlr_loss_timeout_sec": 0, 00:32:25.718 "reconnect_delay_sec": 0, 00:32:25.718 "fast_io_fail_timeout_sec": 0, 00:32:25.718 "disable_auto_failback": false, 00:32:25.718 "generate_uuids": false, 00:32:25.718 "transport_tos": 0, 00:32:25.718 "nvme_error_stat": false, 00:32:25.718 "rdma_srq_size": 0, 00:32:25.718 "io_path_stat": false, 00:32:25.718 "allow_accel_sequence": false, 00:32:25.718 "rdma_max_cq_size": 0, 00:32:25.719 "rdma_cm_event_timeout_ms": 0, 00:32:25.719 "dhchap_digests": [ 00:32:25.719 "sha256", 00:32:25.719 "sha384", 00:32:25.719 "sha512" 00:32:25.719 ], 00:32:25.719 "dhchap_dhgroups": [ 00:32:25.719 "null", 00:32:25.719 "ffdhe2048", 00:32:25.719 "ffdhe3072", 00:32:25.719 "ffdhe4096", 00:32:25.719 "ffdhe6144", 00:32:25.719 "ffdhe8192" 00:32:25.719 ] 00:32:25.719 } 00:32:25.719 }, 00:32:25.719 { 00:32:25.719 "method": "bdev_nvme_attach_controller", 00:32:25.719 "params": { 00:32:25.719 "name": "nvme0", 00:32:25.719 "trtype": "TCP", 00:32:25.719 "adrfam": "IPv4", 00:32:25.719 "traddr": "127.0.0.1", 00:32:25.719 "trsvcid": "4420", 00:32:25.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.719 "prchk_reftag": false, 00:32:25.719 
"prchk_guard": false, 00:32:25.719 "ctrlr_loss_timeout_sec": 0, 00:32:25.719 "reconnect_delay_sec": 0, 00:32:25.719 "fast_io_fail_timeout_sec": 0, 00:32:25.719 "psk": "key0", 00:32:25.719 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.719 "hdgst": false, 00:32:25.719 "ddgst": false 00:32:25.719 } 00:32:25.719 }, 00:32:25.719 { 00:32:25.719 "method": "bdev_nvme_set_hotplug", 00:32:25.719 "params": { 00:32:25.719 "period_us": 100000, 00:32:25.719 "enable": false 00:32:25.719 } 00:32:25.719 }, 00:32:25.719 { 00:32:25.719 "method": "bdev_wait_for_examine" 00:32:25.719 } 00:32:25.719 ] 00:32:25.719 }, 00:32:25.719 { 00:32:25.719 "subsystem": "nbd", 00:32:25.719 "config": [] 00:32:25.719 } 00:32:25.719 ] 00:32:25.719 }' 00:32:25.719 [2024-07-12 11:10:42.672081] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:25.719 [2024-07-12 11:10:42.672139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332760 ] 00:32:25.719 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.979 [2024-07-12 11:10:42.744186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.979 [2024-07-12 11:10:42.797806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.979 [2024-07-12 11:10:42.939154] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:26.550 11:10:43 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:26.550 11:10:43 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:26.550 11:10:43 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:26.550 11:10:43 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:26.550 11:10:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.811 11:10:43 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:26.811 11:10:43 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:26.811 11:10:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:26.811 11:10:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:26.811 11:10:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.811 11:10:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.811 11:10:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:26.811 11:10:43 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:26.811 11:10:43 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:26.811 11:10:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:26.811 11:10:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:26.811 11:10:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.811 11:10:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:26.811 11:10:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.072 11:10:43 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:27.072 11:10:43 keyring_file -- keyring/file.sh@123 -- # bperf_cmd 
bdev_nvme_get_controllers 00:32:27.072 11:10:43 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:27.072 11:10:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:27.333 11:10:44 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:27.333 11:10:44 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:27.333 11:10:44 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.oWBUwQSvHa /tmp/tmp.3zb03yH98e 00:32:27.333 11:10:44 keyring_file -- keyring/file.sh@20 -- # killprocess 2332760 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2332760 ']' 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2332760 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2332760 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2332760' 00:32:27.333 killing process with pid 2332760 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@967 -- # kill 2332760 00:32:27.333 Received shutdown signal, test time was about 1.000000 seconds 00:32:27.333 00:32:27.333 Latency(us) 00:32:27.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.333 =================================================================================================================== 00:32:27.333 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@972 -- # wait 2332760 00:32:27.333 11:10:44 keyring_file -- keyring/file.sh@21 -- # killprocess 2330862 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2330862 ']' 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2330862 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2330862 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2330862' 00:32:27.333 killing process with pid 2330862 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@967 -- # kill 2330862 00:32:27.333 [2024-07-12 11:10:44.303258] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:27.333 11:10:44 keyring_file -- common/autotest_common.sh@972 -- # wait 2330862 00:32:27.593 00:32:27.593 real 0m11.351s 00:32:27.593 user 0m26.731s 00:32:27.593 sys 0m2.735s 00:32:27.593 11:10:44 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:27.593 11:10:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:27.593 
************************************ 00:32:27.593 END TEST keyring_file 00:32:27.593 ************************************ 00:32:27.593 11:10:44 -- common/autotest_common.sh@1142 -- # return 0 00:32:27.593 11:10:44 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:27.593 11:10:44 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:27.593 11:10:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:27.593 11:10:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.593 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:32:27.593 ************************************ 00:32:27.593 START TEST keyring_linux 00:32:27.593 ************************************ 00:32:27.593 11:10:44 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:27.855 * Looking for test storage... 00:32:27.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:27.855 11:10:44 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:27.855 11:10:44 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.855 11:10:44 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.855 11:10:44 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.855 11:10:44 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.855 11:10:44 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.855 11:10:44 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.855 11:10:44 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.855 11:10:44 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:27.855 11:10:44 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:27.855 11:10:44 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:27.856 11:10:44 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:27.856 11:10:44 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:27.856 11:10:44 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:27.856 11:10:44 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:27.856 11:10:44 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:27.856 11:10:44 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:27.856 /tmp/:spdk-test:key0 00:32:27.856 11:10:44 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:27.856 11:10:44 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:27.856 11:10:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:27.856 /tmp/:spdk-test:key1 00:32:27.856 11:10:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2333185 00:32:27.856 11:10:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2333185 00:32:27.856 11:10:44 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:27.856 11:10:44 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2333185 ']' 00:32:27.856 11:10:44 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.856 11:10:44 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:27.856 11:10:44 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
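
Both /tmp/:spdk-test:key files above come out of format_interchange_psk, which the trace shows delegating to a short python heredoc (nvmf/common.sh@705 -- # python -). A sketch of what that heredoc appears to compute, assuming the TP 8006 interchange framing: base64 over the configured PSK bytes plus a 4-byte CRC32 trailer, wrapped as NVMeTLSkey-1:<digest>:<b64>:. The little-endian CRC byte order is inferred from the keyctl payloads printed just below, not quoted from the SPDK source:

key=00112233445566778899aabbccddeeff   # key0 from linux.sh@13; the hex string is fed in as raw ASCII bytes
python - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order; it reproduces this log's strings
# digest 0 becomes the "00" field; no hash is applied to the key material itself
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF
# expected: NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
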
00:32:27.856 11:10:44 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:27.856 11:10:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:28.116 [2024-07-12 11:10:44.858210] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:28.116 [2024-07-12 11:10:44.858279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333185 ] 00:32:28.116 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.116 [2024-07-12 11:10:44.935691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.116 [2024-07-12 11:10:44.997593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.688 11:10:45 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:28.688 11:10:45 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:28.688 11:10:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:28.688 11:10:45 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.688 11:10:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:28.688 [2024-07-12 11:10:45.610513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.688 null0 00:32:28.688 [2024-07-12 11:10:45.642563] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:28.688 [2024-07-12 11:10:45.642920] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:28.688 11:10:45 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.688 11:10:45 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:28.688 353391810 00:32:28.688 11:10:45 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:28.688 223364207 00:32:28.949 11:10:45 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2333291 00:32:28.949 11:10:45 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2333291 /var/tmp/bperf.sock 00:32:28.949 11:10:45 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:28.949 11:10:45 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2333291 ']' 00:32:28.949 11:10:45 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:28.949 11:10:45 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:28.949 11:10:45 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:28.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:28.949 11:10:45 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:28.949 11:10:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:28.949 [2024-07-12 11:10:45.727176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
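
At this point both PSKs sit in the session keyring and bdevperf is parked on --wait-for-rpc; everything the rest of the trace does with the kernel keyring reduces to four keyutils calls. The same sequence, runnable stand-alone (identical commands to linux.sh@16, @27 and @34; the serial 353391810 is specific to this run and will differ elsewhere):

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)  # prints the new serial
keyctl search @s user :spdk-test:key0   # get_keysn: resolves the description back to $sn
keyctl print "$sn"                      # the payload bdevperf reads when attached with --psk :spdk-test:key0
keyctl unlink "$sn"                     # cleanup() path; reports "1 links removed"
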
00:32:28.949 [2024-07-12 11:10:45.727255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333291 ] 00:32:28.949 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.949 [2024-07-12 11:10:45.801532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.949 [2024-07-12 11:10:45.855372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.519 11:10:46 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:29.519 11:10:46 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:29.519 11:10:46 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:29.519 11:10:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:29.779 11:10:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:29.779 11:10:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:30.039 11:10:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:30.039 11:10:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:30.039 [2024-07-12 11:10:46.986034] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:30.299 nvme0n1 00:32:30.299 11:10:47 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:30.299 11:10:47 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:30.299 11:10:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:30.299 11:10:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:30.299 11:10:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:30.299 11:10:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.299 11:10:47 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:30.299 11:10:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:30.299 11:10:47 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:30.299 11:10:47 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:30.299 11:10:47 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:30.299 11:10:47 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:30.299 11:10:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.560 11:10:47 keyring_linux -- keyring/linux.sh@25 -- # sn=353391810 00:32:30.560 11:10:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:30.560 11:10:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:30.560 11:10:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 353391810 == \3\5\3\3\9\1\8\1\0 ]] 00:32:30.560 11:10:47 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 353391810 00:32:30.560 11:10:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:30.560 11:10:47 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:30.560 Running I/O for 1 seconds... 00:32:31.945 00:32:31.945 Latency(us) 00:32:31.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.945 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:31.945 nvme0n1 : 1.01 12613.65 49.27 0.00 0.00 10107.76 7645.87 18677.76 00:32:31.945 =================================================================================================================== 00:32:31.945 Total : 12613.65 49.27 0.00 0.00 10107.76 7645.87 18677.76 00:32:31.945 0 00:32:31.945 11:10:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:31.945 11:10:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:31.945 11:10:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:31.945 11:10:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:31.945 11:10:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:31.945 11:10:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:31.945 11:10:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:31.945 11:10:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.945 11:10:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:31.945 11:10:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:31.945 11:10:48 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:31.945 11:10:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:31.945 11:10:48 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:31.945 11:10:48 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:31.945 11:10:48 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:31.945 11:10:48 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:31.945 11:10:48 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:31.945 11:10:48 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:31.945 11:10:48 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:31.945 11:10:48 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:32.206 [2024-07-12 11:10:49.021813] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:32.206 [2024-07-12 11:10:49.022548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd7950 (107): Transport endpoint is not connected 00:32:32.206 [2024-07-12 11:10:49.023544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd7950 (9): Bad file descriptor 00:32:32.206 [2024-07-12 11:10:49.024546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:32.206 [2024-07-12 11:10:49.024554] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:32.206 [2024-07-12 11:10:49.024563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:32.206 request: 00:32:32.206 { 00:32:32.206 "name": "nvme0", 00:32:32.206 "trtype": "tcp", 00:32:32.206 "traddr": "127.0.0.1", 00:32:32.206 "adrfam": "ipv4", 00:32:32.206 "trsvcid": "4420", 00:32:32.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:32.206 "prchk_reftag": false, 00:32:32.206 "prchk_guard": false, 00:32:32.206 "hdgst": false, 00:32:32.206 "ddgst": false, 00:32:32.206 "psk": ":spdk-test:key1", 00:32:32.206 "method": "bdev_nvme_attach_controller", 00:32:32.206 "req_id": 1 00:32:32.206 } 00:32:32.206 Got JSON-RPC error response 00:32:32.206 response: 00:32:32.206 { 00:32:32.206 "code": -5, 00:32:32.206 "message": "Input/output error" 00:32:32.206 } 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@33 -- # sn=353391810 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 353391810 00:32:32.206 1 links removed 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@33 -- # sn=223364207 00:32:32.206 
11:10:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 223364207 00:32:32.206 1 links removed 00:32:32.206 11:10:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2333291 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2333291 ']' 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2333291 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2333291 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2333291' 00:32:32.206 killing process with pid 2333291 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@967 -- # kill 2333291 00:32:32.206 Received shutdown signal, test time was about 1.000000 seconds 00:32:32.206 00:32:32.206 Latency(us) 00:32:32.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.206 =================================================================================================================== 00:32:32.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:32.206 11:10:49 keyring_linux -- common/autotest_common.sh@972 -- # wait 2333291 00:32:32.467 11:10:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2333185 00:32:32.467 11:10:49 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2333185 ']' 00:32:32.467 11:10:49 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2333185 00:32:32.467 11:10:49 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:32.467 11:10:49 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.467 11:10:49 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2333185 00:32:32.467 11:10:49 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:32.467 11:10:49 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:32.467 11:10:49 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2333185' 00:32:32.467 killing process with pid 2333185 00:32:32.467 11:10:49 keyring_linux -- common/autotest_common.sh@967 -- # kill 2333185 00:32:32.467 11:10:49 keyring_linux -- common/autotest_common.sh@972 -- # wait 2333185 00:32:32.728 00:32:32.728 real 0m4.908s 00:32:32.728 user 0m8.648s 00:32:32.728 sys 0m1.305s 00:32:32.728 11:10:49 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:32.728 11:10:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:32.728 ************************************ 00:32:32.728 END TEST keyring_linux 00:32:32.728 ************************************ 00:32:32.728 11:10:49 -- common/autotest_common.sh@1142 -- # return 0 00:32:32.728 11:10:49 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:32.728 11:10:49 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:32.728 11:10:49 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:32.729 11:10:49 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:32.729 11:10:49 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:32.729 11:10:49 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:32.729 11:10:49 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:32.729 11:10:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:32.729 11:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:32.729 11:10:49 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:32.729 11:10:49 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:32.729 11:10:49 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:32.729 11:10:49 -- common/autotest_common.sh@10 -- # set +x 00:32:40.871 INFO: APP EXITING 00:32:40.871 INFO: killing all VMs 00:32:40.871 INFO: killing vhost app 00:32:40.871 WARN: no vhost pid file found 00:32:40.871 INFO: EXIT DONE 00:32:44.194 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:44.194 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:44.194 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:48.403 Cleaning 00:32:48.403 Removing: /var/run/dpdk/spdk0/config 00:32:48.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:48.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:48.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:48.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:48.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:48.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:48.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:48.403 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:48.403 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:48.403 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:48.403 Removing: /var/run/dpdk/spdk1/config 00:32:48.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:48.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:48.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:48.403 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:32:48.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:32:48.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:32:48.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:32:48.403 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:32:48.403 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:32:48.403 Removing: /var/run/dpdk/spdk1/hugepage_info
00:32:48.403 Removing: /var/run/dpdk/spdk1/mp_socket
00:32:48.403 Removing: /var/run/dpdk/spdk2/config
00:32:48.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:32:48.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:32:48.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:32:48.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:32:48.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:32:48.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:32:48.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:32:48.403 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:32:48.403 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:32:48.403 Removing: /var/run/dpdk/spdk2/hugepage_info
00:32:48.403 Removing: /var/run/dpdk/spdk3/config
00:32:48.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:32:48.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:32:48.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:32:48.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:32:48.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:32:48.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:32:48.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:32:48.403 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:32:48.403 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:32:48.403 Removing: /var/run/dpdk/spdk3/hugepage_info
00:32:48.403 Removing: /var/run/dpdk/spdk4/config
00:32:48.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:32:48.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:32:48.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:32:48.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:32:48.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:32:48.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:32:48.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:32:48.403 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:32:48.403 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:32:48.403 Removing: /var/run/dpdk/spdk4/hugepage_info
00:32:48.403 Removing: /dev/shm/bdev_svc_trace.1
00:32:48.403 Removing: /dev/shm/nvmf_trace.0
00:32:48.403 Removing: /dev/shm/spdk_tgt_trace.pid1876233
00:32:48.403 Removing: /var/run/dpdk/spdk0
00:32:48.403 Removing: /var/run/dpdk/spdk1
00:32:48.403 Removing: /var/run/dpdk/spdk2
00:32:48.403 Removing: /var/run/dpdk/spdk3
00:32:48.404 Removing: /var/run/dpdk/spdk4
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1874753
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1876233
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1876809
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1877999
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1878150
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1879420
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1879543
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1879907
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1880808
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1881566
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1881948
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1882239
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1882528
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1882814
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1883173
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1883525
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1883810
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1884973
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1888383
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1888709
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1889000
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1889287
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1889666
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1889821
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1890349
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1890383
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1890745
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1890931
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1891122
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1891383
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1891893
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1892107
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1892371
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1892690
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1892735
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1893081
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1893263
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1893482
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1893837
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1894184
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1894539
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1894734
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1894933
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1895271
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1895629
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1895976
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1896179
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1896385
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1896717
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1897069
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1897417
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1897653
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1897847
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1898164
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1898582
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1898976
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1899043
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1899451
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1904367
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1958118
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1963548
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1975245
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1981684
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1986638
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1987337
00:32:48.404 Removing: /var/run/dpdk/spdk_pid1994765
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2002042
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2002045
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2003048
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2004055
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2005069
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2005737
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2005743
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2006078
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2006113
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2006221
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2007393
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2008857
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2009993
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2010629
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2010668
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2010983
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2012206
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2013508
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2023521
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2023872
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2028908
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2035751
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2038828
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2051194
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2062448
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2064456
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2065473
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2085757
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2090429
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2120794
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2126174
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2128135
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2130196
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2130538
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2130773
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2130902
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2131613
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2133946
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2135024
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2135432
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2138116
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2138821
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2139534
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2144583
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2157086
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2161913
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2169104
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2170600
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2172261
00:32:48.404 Removing: /var/run/dpdk/spdk_pid2177497
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2182243
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2191316
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2191322
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2196366
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2196692
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2196985
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2197372
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2197381
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2202906
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2203574
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2209303
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2212660
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2219033
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2225679
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2235484
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2244059
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2244113
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2266546
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2267375
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2268199
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2268897
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2269897
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2270648
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2271324
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2272010
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2277058
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2277395
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2284446
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2284801
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2287313
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2294748
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2294758
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2300787
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2303129
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2305321
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2306954
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2309713
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2311143
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2321021
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2321505
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2322098
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2325034
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2325702
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2326240
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2330862
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2330952
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2332760
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2333185
00:32:48.665 Removing: /var/run/dpdk/spdk_pid2333291
00:32:48.665 Clean
00:32:48.925 11:11:05 -- common/autotest_common.sh@1451 -- # return 0
00:32:48.925 11:11:05 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:32:48.925 11:11:05 -- common/autotest_common.sh@728 -- # xtrace_disable
00:32:48.925 11:11:05 -- common/autotest_common.sh@10 -- # set +x
00:32:48.925 11:11:05 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:32:48.925 11:11:05 -- common/autotest_common.sh@728 -- # xtrace_disable
00:32:48.925 11:11:05 -- common/autotest_common.sh@10 -- # set +x
00:32:48.925 11:11:05 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:48.925 11:11:05 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:48.925 11:11:05 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:48.925 11:11:05 -- spdk/autotest.sh@391 -- # hash lcov
00:32:48.925 11:11:05 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:48.925 11:11:05 -- spdk/autotest.sh@393 -- # hostname
00:32:48.925 11:11:05 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:49.186 geninfo: WARNING: invalid characters removed from testname!
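Note: autotest.sh@393 above captures per-test coverage with lcov, and the @394-@399 steps below merge that capture with the pre-test baseline and filter out third-party and system sources. A minimal standalone sketch of the same capture-merge-filter flow; SPDK_DIR, OUT, and the shortened option set are illustrative assumptions, not values taken from this job:

    #!/usr/bin/env bash
    # Sketch of the lcov flow seen in this log; paths are placeholders and the
    # --rc genhtml_* options from the log are omitted for brevity.
    set -e
    SPDK_DIR=/path/to/spdk        # assumed checkout location
    OUT="$SPDK_DIR/../output"     # assumed output dir, mirroring the log layout
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    # Capture counters from the instrumented build tree, tagged with the host name.
    lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge baseline and test captures, then strip DPDK and system sources.
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"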
00:33:15.829 11:11:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:16.399 11:11:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:17.781 11:11:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:19.690 11:11:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:21.073 11:11:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:22.984 11:11:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:24.369 11:11:41 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:24.369 11:11:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:24.369 11:11:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:24.369 11:11:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:24.369 11:11:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:24.369 11:11:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:24.369 11:11:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:24.369 11:11:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:24.369 11:11:41 -- paths/export.sh@5 -- $ export PATH
00:33:24.369 11:11:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:24.369 11:11:41 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:24.369 11:11:41 -- common/autobuild_common.sh@444 -- $ date +%s
00:33:24.369 11:11:41 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720775501.XXXXXX
00:33:24.369 11:11:41 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720775501.D2siw3
00:33:24.369 11:11:41 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:33:24.369 11:11:41 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:33:24.369 11:11:41 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:24.369 11:11:41 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:24.369 11:11:41 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:24.369 11:11:41 -- common/autobuild_common.sh@460 -- $ get_config_params
00:33:24.369 11:11:41 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:33:24.369 11:11:41 -- common/autotest_common.sh@10 -- $ set +x
00:33:24.369 11:11:41 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:33:24.369 11:11:41 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:33:24.369 11:11:41 -- pm/common@17 -- $ local monitor
00:33:24.369 11:11:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:24.369 11:11:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:24.369 11:11:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:24.369 11:11:41 -- pm/common@21 -- $ date +%s
00:33:24.369 11:11:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:24.369 11:11:41 -- pm/common@21 -- $ date +%s
00:33:24.369 11:11:41 -- pm/common@25 -- $ sleep 1
00:33:24.369 11:11:41 -- pm/common@21 -- $ date +%s
00:33:24.369 11:11:41 -- pm/common@21 -- $ date +%s
00:33:24.369 11:11:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720775501
00:33:24.369 11:11:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720775501
00:33:24.369 11:11:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720775501
00:33:24.369 11:11:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720775501
00:33:24.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720775501_collect-vmstat.pm.log
00:33:24.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720775501_collect-cpu-load.pm.log
00:33:24.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720775501_collect-cpu-temp.pm.log
00:33:24.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720775501_collect-bmc-pm.bmc.pm.log
00:33:25.316 11:11:42 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:33:25.316 11:11:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:33:25.316 11:11:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:25.316 11:11:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:25.316 11:11:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:25.316 11:11:42 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:25.317 11:11:42 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:25.317 11:11:42 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:25.317 11:11:42 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:25.317 11:11:42 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:25.317 11:11:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:25.317 11:11:42 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:25.317 11:11:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:25.317 11:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:25.317 11:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:25.317 11:11:42 -- pm/common@44 -- $ pid=2345719
00:33:25.317 11:11:42 -- pm/common@50 -- $ kill -TERM 2345719
00:33:25.317 11:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:25.317 11:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:25.317 11:11:42 -- pm/common@44 -- $ pid=2345720
00:33:25.317 11:11:42 -- pm/common@50 -- $ kill -TERM 2345720
00:33:25.317 11:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:25.317 11:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:25.317 11:11:42 -- pm/common@44 -- $ pid=2345722
00:33:25.317 11:11:42 -- pm/common@50 -- $ kill -TERM 2345722
00:33:25.317 11:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:25.317 11:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:25.317 11:11:42 -- pm/common@44 -- $ pid=2345745
00:33:25.317 11:11:42 -- pm/common@50 -- $ sudo -E kill -TERM 2345745
00:33:25.317 + [[ -n 1753691 ]]
00:33:25.317 + sudo kill 1753691
00:33:25.593 [Pipeline] }
00:33:25.609 [Pipeline] // stage
00:33:25.614 [Pipeline] }
00:33:25.629 [Pipeline] // timeout
00:33:25.636 [Pipeline] }
00:33:25.652 [Pipeline] // catchError
00:33:25.658 [Pipeline] }
00:33:25.672 [Pipeline] // wrap
00:33:25.676 [Pipeline] }
00:33:25.690 [Pipeline] // catchError
00:33:25.698 [Pipeline] stage
00:33:25.700 [Pipeline] { (Epilogue)
00:33:25.715 [Pipeline] catchError
00:33:25.716 [Pipeline] {
00:33:25.731 [Pipeline] echo
00:33:25.732 Cleanup processes
00:33:25.738 [Pipeline] sh
00:33:26.026 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:26.026 2345827 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:26.026 2346267 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:26.040 [Pipeline] sh
00:33:26.326 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:26.326 ++ grep -v 'sudo pgrep'
00:33:26.326 ++ awk '{print $1}'
00:33:26.326 + sudo kill -9 2345827
00:33:26.339 [Pipeline] sh
00:33:26.624 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:38.864 [Pipeline] sh
00:33:39.148 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:39.148 Artifacts sizes are good
00:33:39.162 [Pipeline] archiveArtifacts
00:33:39.170 Archiving artifacts
00:33:39.371 [Pipeline] sh
00:33:39.687 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:39.699 [Pipeline] cleanWs
00:33:39.709 [WS-CLEANUP] Deleting project workspace...
00:33:39.709 [WS-CLEANUP] Deferred wipeout is used...
00:33:39.716 [WS-CLEANUP] done
00:33:39.718 [Pipeline] }
00:33:39.740 [Pipeline] // catchError
00:33:39.754 [Pipeline] sh
00:33:40.042 + logger -p user.info -t JENKINS-CI
00:33:40.052 [Pipeline] }
00:33:40.069 [Pipeline] // stage
00:33:40.075 [Pipeline] }
00:33:40.091 [Pipeline] // node
00:33:40.096 [Pipeline] End of Pipeline
00:33:40.124 Finished: SUCCESS
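Note: the pm/common@42-@50 trace above shows the pid-file pattern used to stop the resource monitors: each collector recorded its pid in a <name>.pid file under the power output directory when it started, and signal_monitor_resources sends TERM to whatever pid each file holds (via sudo only for the BMC collector, which runs privileged). A minimal equivalent loop; POWER_DIR is a placeholder, and only the pid-file names come from this log:

    # Sketch: stop monitors recorded as <name>.pid files (POWER_DIR is assumed).
    POWER_DIR=/path/to/output/power
    for name in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$POWER_DIR/$name.pid"
        [[ -e "$pidfile" ]] || continue                      # monitor never started
        kill -TERM "$(cat "$pidfile")" 2>/dev/null || true   # may have already exited
    done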